| Column | Type | Range |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-03 00:36:49 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 535 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-03 00:36:49 |
| card | string | length 11 – 1.01M |
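Records with this schema can be loaded and filtered with the `datasets` library; a minimal sketch (the dataset repository id below is a placeholder, not the actual source of this dump):

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the actual source of this dump.
ds = load_dataset("your-org/hub-model-cards", split="train")

# Each record mirrors the schema above: modelId, author, last_modified,
# downloads, likes, library_name, tags, pipeline_tag, createdAt, card.
asr_models = ds.filter(lambda row: row["pipeline_tag"] == "automatic-speech-recognition")
print(asr_models[0]["modelId"], asr_models[0]["downloads"])
```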
GillesMeyhi/whisper-tiny-minds14
|
GillesMeyhi
| 2023-09-09T15:32:21Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-08T13:49:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 33.47107438016529
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7299
- Wer Ortho: 34.1764
- Wer: 33.4711
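A minimal inference sketch with the `transformers` ASR pipeline (the audio file name is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
asr = pipeline("automatic-speech-recognition", model="GillesMeyhi/whisper-tiny-minds14")

# Transcribe a local audio file (path is illustrative).
result = asr("sample_call.wav")
print(result["text"])
```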
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.0012 | 17.86 | 500 | 0.7299 | 34.1764 | 33.4711 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
deepachalapathi/without_questions
|
deepachalapathi
| 2023-09-09T15:28:19Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-09T15:27:36Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# whateverweird17/without_questions
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
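A minimal sketch of this two-step procedure with the `setfit` trainer; the dataset, base encoder, and hyperparameters below are illustrative, not the ones used for this checkpoint:

```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Illustrative few-shot setup: 64 labeled examples from SST-2.
dataset = load_dataset("sst2")
train_ds = dataset["train"].select(range(64))
eval_ds = dataset["validation"]

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    loss_class=CosineSimilarityLoss,   # contrastive fine-tuning of the encoder
    batch_size=16,
    num_iterations=20,                 # number of contrastive pairs per example
    num_epochs=1,
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()
print(trainer.evaluate())
```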
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("whateverweird17/without_questions")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐คฎ"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
sento800/distilbert-base-cased-squad
|
sento800
| 2023-09-09T15:24:52Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-01T18:05:20Z |
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-cased-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-squad
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3394
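A minimal inference sketch with the `transformers` question-answering pipeline (the question and context are illustrative):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
qa = pipeline("question-answering", model="sento800/distilbert-base-cased-squad")

answer = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(answer["answer"], answer["score"])
```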
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3677 | 1.0 | 1350 | 1.5690 |
| 1.2389 | 2.0 | 2700 | 1.3666 |
| 0.8202 | 3.0 | 4050 | 1.3394 |
| 0.5676 | 4.0 | 5400 | 1.5052 |
| 0.4022 | 5.0 | 6750 | 1.6366 |
| 0.305 | 6.0 | 8100 | 1.7423 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Terps/ppo-Huggy
|
Terps
| 2023-09-09T15:24:00Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-09T15:23:56Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Terps/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Chris-choi/xlm-roberta-base-finetuned-panx-all
|
Chris-choi
| 2023-09-09T15:06:33Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-09T14:50:45Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1738
- F1: 0.8542
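A minimal inference sketch with the `transformers` token-classification pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

# NER pipeline with simple entity grouping.
ner = pipeline(
    "token-classification",
    model="Chris-choi/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)

for entity in ner("Jeff Dean arbeitet bei Google in Kalifornien."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```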
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.301 | 1.0 | 835 | 0.1789 | 0.8323 |
| 0.1567 | 2.0 | 1670 | 0.1684 | 0.8437 |
| 0.1025 | 3.0 | 2505 | 0.1738 | 0.8542 |
### Framework versions
- Transformers 4.33.1
- Pytorch 1.12.1+cu113
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Chris-choi/xlm-roberta-base-finetuned-panx-en
|
Chris-choi
| 2023-09-09T14:49:28Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-09T14:42:18Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6911349520045172
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3993
- F1: 0.6911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0952 | 1.0 | 50 | 0.5689 | 0.5558 |
| 0.4968 | 2.0 | 100 | 0.4343 | 0.6557 |
| 0.3427 | 3.0 | 150 | 0.3993 | 0.6911 |
### Framework versions
- Transformers 4.33.1
- Pytorch 1.12.1+cu113
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Ori/lama-2-13b-peft-strategyqa-no-retrieval-v2-seed-1
|
Ori
| 2023-09-09T14:45:55Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"region:us"
] | null | 2023-09-09T14:43:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
trieudemo11/llama_7b_attrb_cate_big_l280_18
|
trieudemo11
| 2023-09-09T14:41:39Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-09T14:41:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
ipipan/herbert-base-qa-v1
|
ipipan
| 2023-09-09T14:29:35Z | 107 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"pl",
"dataset:ipipan/polqa",
"dataset:ipipan/maupqa",
"arxiv:2305.05486",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-03-31T10:22:50Z |
---
datasets:
- ipipan/polqa
- ipipan/maupqa
language:
- pl
pipeline_tag: sentence-similarity
---
# HerBERT QA
The HerBERT QA model encodes Polish sentences or paragraphs into a 768-dimensional dense vector space and can be used for tasks like document retrieval or semantic search. See [the paper](https://arxiv.org/abs/2305.05486) for more details.
This model is deprecated. Please consider using the [Silver Retriever (v1)](https://huggingface.co/ipipan/silver-retriever-base-v1) for much better performance.
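The card does not include usage code; below is a minimal encoding sketch with plain `transformers`, assuming [CLS]-token pooling and no special question/passage prefixes (both are assumptions, not documented behaviour of this checkpoint):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ipipan/herbert-base-qa-v1")
model = AutoModel.from_pretrained("ipipan/herbert-base-qa-v1")

texts = ["Kto napisał 'Pana Tadeusza'?", "Adam Mickiewicz napisał 'Pana Tadeusza'."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    # Use the [CLS] token embedding as the 768-dimensional sentence vector (assumed pooling).
    embeddings = model(**batch).last_hidden_state[:, 0]

# Cosine similarity between the question and passage vectors.
score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(score.item())
```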
## Additional Information
### Model Creators
This model was created by Piotr Rybak from the [Institute of Computer Science, Polish Academy of Sciences](http://zil.ipipan.waw.pl/).
This work was supported by the European Regional Development Fund as a part of the 2014–2020 Smart Growth Operational Programme, CLARIN – Common Language Resources and Technology Infrastructure, project no. POIR.04.02.00-00C002/19.
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{rybak-2023-maupqa,
title = "{MAUPQA}: Massive Automatically-created {P}olish Question Answering Dataset",
author = "Rybak, Piotr",
booktitle = "Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bsnlp-1.2",
pages = "11--16",
abstract = "Recently, open-domain question answering systems have begun to rely heavily on annotated datasets to train neural passage retrievers. However, manually annotating such datasets is both difficult and time-consuming, which limits their availability for less popular languages. In this work, we experiment with several methods for automatically collecting weakly labeled datasets and show how they affect the performance of the neural passage retrieval models. As a result of our work, we publish the MAUPQA dataset, consisting of nearly 400,000 question-passage pairs for Polish, as well as the HerBERT-QA neural retriever.",
}
```
|
rlewczuk/distilbert-base-uncased-finetuned-emotion
|
rlewczuk
| 2023-09-09T14:27:50Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-09T14:16:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9249250567487983
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2171
- Accuracy: 0.925
- F1: 0.9249
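A minimal inference sketch with the `transformers` text-classification pipeline (the input sentence is illustrative):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="rlewczuk/distilbert-base-uncased-finetuned-emotion")

# The model predicts one of the emotion dataset's six labels.
print(classifier("I can't wait to see you again!"))
```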
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8247 | 1.0 | 250 | 0.3085 | 0.908 | 0.9060 |
| 0.2455 | 2.0 | 500 | 0.2171 | 0.925 | 0.9249 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.1.0.dev20230414+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
rrozb/rl_course_vizdoom_health_gathering_supreme
|
rrozb
| 2023-09-09T14:23:03Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-09T14:13:42Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 4.22 +/- 0.57
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r rrozb/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
squarelike/polyglot-ko-medical-5.8b
|
squarelike
| 2023-09-09T14:15:45Z | 263 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"medical",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-15T06:46:52Z |
---
language:
- ko
tags:
- pytorch
- causal-lm
- medical
license: apache-2.0
pipeline_tag: text-generation
---
[https://github.com/jwj7140/ko-medical-chat](https://github.com/jwj7140/ko-medical-chat)
# Polyglot-Ko-Medical-5.8b
polyglot-ko-medical is a base model built on [polyglot-ko](https://github.com/EleutherAI/polyglot) and further trained on raw Korean-language text from the medical domain.
## Training data
polyglot-ko-medical was trained on roughly 420 MB of Korean medical-domain corpora. The main datasets are as follows.
| Source | Size (MB) | Link |
|----------------------------------|---------|------------------------------------------|
| AIHub medical and legal professional text corpus | 351.0 | aihub.or.kr |
| AIHub specialized-field corpus | 63.4 | aihub.or.kr |
| KDCA National Health Information Portal | 8.33 | health.kdca.go.kr |
| MOHW National Mental Health Information Portal | < 1.0 | mentalhealth.go.kr |
## Training
polyglot-ko-medical-5.8b was further fine-tuned from [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) with QLoRA.
- lora_alpha: 32
- lora_dropout: 0.05
- lora_r: 8
- target_modules: query_key_value
- epoch: 3
- learning_rate: 3e-4
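A minimal text-generation sketch with `transformers` (the prompt and sampling settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "squarelike/polyglot-ko-medical-5.8b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# Korean prompt ("The main symptoms of diabetes are"); purely illustrative.
prompt = "당뇨병의 주요 증상은"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```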
|
haouarin/jais-13b-8bits
|
haouarin
| 2023-09-09T14:08:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jais",
"text-generation",
"custom_code",
"ar",
"autotrain_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2023-09-08T13:13:32Z |
---
language:
- ar
---
Google Colab demo: https://colab.research.google.com/drive/1QLihIVHOnWrz5P7XER4mn13YuGAbnPDq?usp=sharing
|
Yntec/BasilRemix
|
Yntec
| 2023-09-09T14:08:34Z | 285 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"Anime",
"3D",
"Illustration",
"nuigurumi",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-08T02:14:20Z |
---
license: other
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- 3D
- Illustration
- nuigurumi
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# Basil Remix
BasilMix mixed with ReVAnimated v11 to bring its compositions back to life! It has the MoistMixV2VAE baked in.
Comparison:

(Click for larger)
Sample and prompt:

Pretty detailed CUTE Girl, Cartoon, sitting on a computer monitor, holding antique TV, DETAILED CHIBI EYES, gorgeous detailed hair, Magazine ad, iconic, 1940, sharp focus. Illustration By KlaysMoji and artgerm and Clay Mann and and leyendecker and kyoani
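A minimal `diffusers` sketch using the sample prompt above (precision and sampler settings are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint from the Hub (fp16 is illustrative; use float32 on CPU).
pipe = StableDiffusionPipeline.from_pretrained("Yntec/BasilRemix", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Pretty detailed CUTE Girl, Cartoon, sitting on a computer monitor, holding antique TV"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("basilremix_sample.png")
```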
Original page:
https://huggingface.co/nuigurumi/basil_mix
# Recipe
- SuperMerger Weight sum Train Difference Use MBW 0,1,1,1,1,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,1,1,1,1,1
Model A:
BasilMix
Model B:
ReVAnimated v11
Output Model:
BasilRemix
|
Sachin16/q-FrozenLake-v1-4x4-noSlippery
|
Sachin16
| 2023-09-09T13:59:21Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-09T13:59:19Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Sachin16/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jjluo/my_awesome_mingliangqiangu_model
|
jjluo
| 2023-09-09T13:50:28Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-09T13:27:22Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_mingliangqiangu_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mingliangqiangu_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1140
- Accuracy: 0.9981
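A minimal inference sketch with the `transformers` image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jjluo/my_awesome_mingliangqiangu_model")

# Path to a local image file (placeholder).
for prediction in classifier("example.jpg"):
    print(prediction["label"], round(prediction["score"], 4))
```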
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7575 | 0.99 | 67 | 1.3989 | 0.9287 |
| 0.4806 | 2.0 | 135 | 0.4502 | 0.9935 |
| 0.2902 | 2.99 | 202 | 0.2922 | 0.9944 |
| 0.2073 | 4.0 | 270 | 0.2118 | 0.9981 |
| 0.1975 | 4.99 | 337 | 0.1831 | 0.9963 |
| 0.1514 | 6.0 | 405 | 0.1576 | 0.9935 |
| 0.1282 | 6.99 | 472 | 0.1290 | 1.0 |
| 0.1224 | 8.0 | 540 | 0.1317 | 0.9963 |
| 0.1147 | 8.99 | 607 | 0.1127 | 1.0 |
| 0.1129 | 9.93 | 670 | 0.1140 | 0.9981 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
StefanoCaloni/PixelCopter
|
StefanoCaloni
| 2023-09-09T13:46:35Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-09T11:17:55Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 29.20 +/- 24.93
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Christiyke/roberta-base-Roberta-Model
|
Christiyke
| 2023-09-09T13:42:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-06T09:03:21Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: roberta-base-Roberta-Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-Roberta-Model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1276
- F1: 0.7654
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5905 | 0.5 | 500 | 0.8005 | 0.7418 |
| 0.5825 | 1.0 | 1000 | 0.7042 | 0.7480 |
| 0.4843 | 1.5 | 1500 | 0.9599 | 0.7538 |
| 0.4913 | 2.0 | 2000 | 0.9035 | 0.7595 |
| 0.396 | 2.5 | 2500 | 0.8974 | 0.7607 |
| 0.398 | 3.0 | 3000 | 0.8997 | 0.7652 |
| 0.3065 | 3.5 | 3500 | 1.0698 | 0.7619 |
| 0.2987 | 4.0 | 4000 | 0.9735 | 0.7655 |
| 0.217 | 4.5 | 4500 | 1.1451 | 0.7560 |
| 0.237 | 5.0 | 5000 | 1.1276 | 0.7654 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Zeroxdesignart/Difu
|
Zeroxdesignart
| 2023-09-09T13:23:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-09T13:22:35Z |
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-xl-base-1.0"
API_TOKEN = "hf_..."  # your Hugging Face API token
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.content

image_bytes = query({
    "inputs": "Astronaut riding a horse",
})

# You can access the image with PIL.Image, for example
import io
from PIL import Image

image = Image.open(io.BytesIO(image_bytes))
```
|
liubomyrgavryliv/en_colorExtractor
|
liubomyrgavryliv
| 2023-09-09T13:20:03Z | 12 | 1 |
spacy
|
[
"spacy",
"token-classification",
"en",
"doi:10.57967/hf/2848",
"license:mit",
"model-index",
"region:us"
] |
token-classification
| 2023-09-07T10:53:06Z |
---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_colorExtractor
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9655765921
- name: NER Recall
type: recall
value: 0.9705882353
- name: NER F Score
type: f_score
value: 0.9680759275
---
Model to extract color entities from chunks of notes
| Feature | Description |
| --- | --- |
| **Name** | `en_colorExtractor` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.6.1,<3.7.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Liubomyr Gavryliv](https://mineralogy.rocks) |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `COLOR` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 96.81 |
| `ENTS_P` | 96.56 |
| `ENTS_R` | 97.06 |
| `TOK2VEC_LOSS` | 1365.52 |
| `NER_LOSS` | 147789.01 |
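A minimal usage sketch, assuming the packaged pipeline has been installed from this repository (the wheel filename below is illustrative):

```python
# Install the packaged pipeline first, e.g.:
#   pip install https://huggingface.co/liubomyrgavryliv/en_colorExtractor/resolve/main/en_colorExtractor-any-py3-none-any.whl
import spacy

nlp = spacy.load("en_colorExtractor")
doc = nlp("The mineral shows pale green to bluish-grey crystals.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # the only label is COLOR
```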
|
ingeol/llama_qlora_test_trainversion2_3000
|
ingeol
| 2023-09-09T12:27:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-09T12:27:04Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
VietnamAIHub/Vietnamese_LLama2_13B_8K_SFT_General_Domain_Knowledge
|
VietnamAIHub
| 2023-09-09T12:25:05Z | 137 | 7 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-29T03:20:47Z |
# Vietnamese Llama2-13B 8k Context Length with LoRA Adapters
This repository contains a Llama-13B model fine-tuned with QLoRA (Quantization Low-Rank Adapter) adapters. The adapter is a plug-and-play tool that enables the LLaMa model to perform well in many Vietnamese NLP tasks.
Project Github page: [Github](https://github.com/VietnamAIHub/Vietnamese_LLMs)
## Model Overview
The Vietnamese Llama2-13b model is a large language model capable of generating meaningful text and can be used in a wide variety of natural language processing tasks, including text generation, sentiment analysis, and more. By using LoRA adapters, the model achieves better performance on low-resource tasks and demonstrates improved generalization.
## Dataset and Fine-Tuning
The LLaMa2 model was fine-tuned on over 200K Vietnamese instructions from various sources to improve its ability to understand and generate text for different tasks. The instruction dataset comprises data from the following sources:
Dataset link: Coming soon
## Testing the model yourself
To load the fine-tuned Llama-13B model with LoRA adapters, follow the code snippet below:
```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    StoppingCriteria,
    StoppingCriteriaList,
    TextIteratorStreamer,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name = "VietnamAIHub/Vietnamese_LLama2_13B_8K_SFT_General_Domain_Knowledge"
cache_dir = None  # optionally set a local directory for caching the weights

## Load the LLaMa model weights (the adapter is already merged into this checkpoint)
m = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,
    torch_dtype=torch.bfloat16,
    pretraining_tp=1,
    cache_dir=cache_dir,
)

tok = AutoTokenizer.from_pretrained(
    model_name,
    cache_dir=cache_dir,
    padding_side="right",
    use_fast=False,  # fast tokenizer giving issues
    tokenizer_type='llama',  # needed for HF name change
    use_auth_token=True,
)
tok.bos_token_id = 1

# Stop generation when the model emits token id 0
stop_token_ids = [0]

class StopOnTokens(StoppingCriteria):
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        for stop_id in stop_token_ids:
            if input_ids[0][-1] == stop_id:
                return True
        return False

prompts_input = "Cách để học tập về một môn học thật tốt"

system_prompt = f"<s>[INST] <<SYS>>\n You are a helpful assistant, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your \
answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure\
that your responses are socially unbiased and positive in nature.\
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not \
correct. If you don't know the answer to a question, please response as language model you are not able to respone detailed to these kind of question.\n<</SYS>>\n\n {prompts_input} [/INST] "

message = system_prompt

stop = StopOnTokens()
streamer = TextIteratorStreamer(tok, timeout=10.0, skip_prompt=True, skip_special_tokens=True)

generation_config = dict(
    temperature=0.1,
    top_k=30,
    top_p=0.95,
    do_sample=True,
    repetition_penalty=1.2,
    max_new_tokens=2048,  # the model supports an 8K context
    early_stopping=True,
    stopping_criteria=StoppingCriteriaList([stop]),
)

inputs = tok(message, return_tensors="pt")
generation_output = m.generate(
    input_ids=inputs["input_ids"].to(device),
    attention_mask=inputs["attention_mask"].to(device),
    eos_token_id=tok.eos_token_id,
    pad_token_id=tok.pad_token_id,
    **generation_config
)

s = generation_output[0]
output = tok.decode(s, skip_special_tokens=True)
print(output)
```
## Conclusion
The Vietnamese Llama2-13b with LoRA adapters is a versatile language model that can be utilized for a wide range of NLP tasks in Vietnamese. We hope that researchers and developers find this model useful and are encouraged to experiment with it in their projects.
For any questions, feedback, or contributions, please feel free to contact the maintainer of this repository, TranNhiem: [LinkedIn](https://www.linkedin.com/in/tran-nhiem-ab1851125/), [Twitter](https://twitter.com/TranRick2), [Facebook](https://www.facebook.com/jean.tran.336), or the project [Discord](https://discord.gg/MC3yDZNz). Happy fine-tuning and experimenting with the Llama2-13B model!
|
sontn122/tmp_trainer
|
sontn122
| 2023-09-09T12:20:41Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2023-09-09T12:17:50Z |
---
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
moniem/finetuning-sentiment-model-3000-samples
|
moniem
| 2023-09-09T11:42:08Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-09T11:35:48Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8646864686468646
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3239
- Accuracy: 0.8633
- F1: 0.8647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Prot10/vit-base-patch16-224-for-pre_evaluation
|
Prot10
| 2023-09-09T11:30:17Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-29T17:34:40Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-for-pre_evaluation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-for-pre_evaluation
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6048
- Accuracy: 0.3929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5774 | 0.98 | 16 | 1.5109 | 0.3022 |
| 1.4794 | 1.97 | 32 | 1.4942 | 0.3242 |
| 1.4536 | 2.95 | 48 | 1.4943 | 0.3187 |
| 1.421 | 4.0 | 65 | 1.4247 | 0.3407 |
| 1.3882 | 4.98 | 81 | 1.4944 | 0.3462 |
| 1.3579 | 5.97 | 97 | 1.4180 | 0.3571 |
| 1.2838 | 6.95 | 113 | 1.4693 | 0.3681 |
| 1.2695 | 8.0 | 130 | 1.4359 | 0.3434 |
| 1.2016 | 8.98 | 146 | 1.4656 | 0.3599 |
| 1.2087 | 9.97 | 162 | 1.4550 | 0.3379 |
| 1.206 | 10.95 | 178 | 1.5056 | 0.3516 |
| 1.1236 | 12.0 | 195 | 1.5003 | 0.3434 |
| 1.0534 | 12.98 | 211 | 1.5193 | 0.3269 |
| 1.0024 | 13.97 | 227 | 1.4890 | 0.3681 |
| 0.9767 | 14.95 | 243 | 1.5628 | 0.3434 |
| 0.9201 | 16.0 | 260 | 1.6306 | 0.3516 |
| 0.9136 | 16.98 | 276 | 1.5715 | 0.3626 |
| 0.8566 | 17.97 | 292 | 1.5966 | 0.3654 |
| 0.8273 | 18.95 | 308 | 1.6048 | 0.3929 |
| 0.7825 | 20.0 | 325 | 1.6175 | 0.3846 |
| 0.736 | 20.98 | 341 | 1.6526 | 0.3929 |
| 0.7008 | 21.97 | 357 | 1.6563 | 0.3736 |
| 0.6714 | 22.95 | 373 | 1.7319 | 0.3901 |
| 0.7039 | 24.0 | 390 | 1.6866 | 0.3929 |
| 0.628 | 24.98 | 406 | 1.7023 | 0.3791 |
| 0.6182 | 25.97 | 422 | 1.7301 | 0.3901 |
| 0.5957 | 26.95 | 438 | 1.7157 | 0.3846 |
| 0.5973 | 28.0 | 455 | 1.7478 | 0.3709 |
| 0.5655 | 28.98 | 471 | 1.7377 | 0.3736 |
| 0.5631 | 29.54 | 480 | 1.7374 | 0.3736 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
felixshier/ac-01-bert-finetuned
|
felixshier
| 2023-09-09T11:25:10Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-15T23:32:39Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: ac-01-bert-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ac-01-bert-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1172
- Validation Loss: 0.5493
- Train F1: 0.8137
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4030, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train F1 | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.5556 | 0.4472 | 0.7965 | 0 |
| 0.3877 | 0.4268 | 0.8107 | 1 |
| 0.2931 | 0.4459 | 0.8165 | 2 |
| 0.1734 | 0.5071 | 0.8223 | 3 |
| 0.1172 | 0.5493 | 0.8137 | 4 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
sd-dreambooth-library/tatar-style
|
sd-dreambooth-library
| 2023-09-09T11:18:50Z | 33 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-09T11:15:48Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### tatar style on Stable Diffusion via Dreambooth
#### model by nailmarsel
This is the Stable Diffusion model fine-tuned on the tatar style concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **tatar_style**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
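A minimal `diffusers` sketch for this checkpoint, using the instance prompt **tatar_style** (the rest of the prompt and the settings are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("sd-dreambooth-library/tatar-style", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Include the instance prompt token "tatar_style" in your prompt.
image = pipe("a portrait of a woman in tatar_style").images[0]
image.save("tatar_style_sample.png")
```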
Here are the images used for training this concept:













|
xszhou/CartPole-v1
|
xszhou
| 2023-09-09T11:17:27Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-09T11:17:17Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 1500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Kyrmasch/mDeBERTa-v3-base-SQuAD2-kaz
|
Kyrmasch
| 2023-09-09T11:08:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-05T05:58:16Z |
Base: timpal0l/mdeberta-v3-base-squad2
|
haouarin/jais-13b-chat-8bits
|
haouarin
| 2023-09-09T10:45:56Z | 6 | 3 |
transformers
|
[
"transformers",
"pytorch",
"jais",
"text-generation",
"custom_code",
"autotrain_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2023-09-08T12:19:30Z |
Google Colab demo: https://colab.research.google.com/drive/13rz5tGDdHc3fTah8qT9rmOKdIg1ylqcD?usp=sharing
|
RVC-RU/glad-valakas-ru
|
RVC-RU
| 2023-09-09T10:38:53Z | 0 | 8 | null |
[
"license:mit",
"region:us"
] | null | 2023-09-09T06:47:31Z |
---
license: mit
---
# Russian-language model of the streamer GLAD VALAKAS
###### By nekoanime :)
##### - The model was trained for 350 epochs. The D and G files are standard.
##### - The dataset is included in the files, so you can freely keep training and refining the model to perfection if you wish.
## Model tests (greets you)
### Below are direct links to download the audio
[Real-time voice recording 1](https://cdn.discordapp.com/attachments/650365898678468647/1149966845969969192/valakas_1.mp3)
[Real-time voice recording 2](https://cdn.discordapp.com/attachments/650365898678468647/1149966846326493246/valakas_2.mp3)
|
SoyGema/english-hindi
|
SoyGema
| 2023-09-09T10:34:48Z | 156 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"translation",
"en",
"hi",
"dataset:opus100",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-09-02T20:43:14Z |
---
language:
- en
- hi
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: english-hindi
results:
- task:
name: Translation
type: translation
dataset:
name: opus100 en-hi
type: opus100
config: en-hi
split: validation
args: en-hi
metrics:
- name: Bleu
type: bleu
value: 0
pipeline_tag: translation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# english-hindi
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus100 en-hi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0653
- Bleu: 0.0
- Gen Len: 97.5
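A minimal inference sketch with the `transformers` translation pipeline; the T5-style task prefix is an assumption, since the card does not state the prefix used during fine-tuning:

```python
from transformers import pipeline

translator = pipeline("translation", model="SoyGema/english-hindi")

# The "translate English to Hindi:" prefix follows the usual T5 convention (assumed).
result = translator("translate English to Hindi: The weather is nice today.")
print(result[0]["translation_text"])
```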
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
badokorach/bert-base-multilingual-cased-finetuned-luganda-qa
|
badokorach
| 2023-09-09T10:31:09Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-09T09:09:27Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-finetuned-luganda-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-luganda-qa
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3748 | 1.0 | 2215 | 0.1817 |
| 0.0707 | 2.0 | 4430 | 0.0123 |
| 0.0141 | 3.0 | 6645 | 0.0007 |
| 0.0045 | 4.0 | 8860 | 0.0002 |
| 0.0005 | 5.0 | 11075 | 0.0000 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
davanstrien/detr_beyond_words
|
davanstrien
| 2023-09-09T10:30:30Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"license:mit",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- object-detection
widget:
- src: https://huggingface.co/davanstrien/detr_beyond_words/resolve/main/19.jpg
example_title: page
- src: https://huggingface.co/davanstrien/detr_beyond_words/resolve/main/65.jpg
example_title: page2
---
# detr_beyond_words (WIP)
[facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) fine tuned on [Beyond Words](https://github.com/LibraryOfCongress/newspaper-navigator/tree/master/beyond_words_data).
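A minimal inference sketch with the `transformers` object-detection pipeline, using one of the widget sample pages from this repository:

```python
from transformers import pipeline

detector = pipeline("object-detection", model="davanstrien/detr_beyond_words")

# Example newspaper page referenced in the widget above.
url = "https://huggingface.co/davanstrien/detr_beyond_words/resolve/main/19.jpg"
for detection in detector(url):
    print(detection["label"], round(detection["score"], 3), detection["box"])
```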
|
camenduru/ffmpeg-cuda
|
camenduru
| 2023-09-09T10:17:18Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-09-09T10:16:55Z |
FFmpeg README
=============
FFmpeg is a collection of libraries and tools to process multimedia content
such as audio, video, subtitles and related metadata.
## Libraries
* `libavcodec` provides implementation of a wider range of codecs.
* `libavformat` implements streaming protocols, container formats and basic I/O access.
* `libavutil` includes hashers, decompressors and miscellaneous utility functions.
* `libavfilter` provides means to alter decoded audio and video through a directed graph of connected filters.
* `libavdevice` provides an abstraction to access capture and playback devices.
* `libswresample` implements audio mixing and resampling routines.
* `libswscale` implements color conversion and scaling routines.
## Tools
* [ffmpeg](https://ffmpeg.org/ffmpeg.html) is a command line toolbox to
manipulate, convert and stream multimedia content.
* [ffplay](https://ffmpeg.org/ffplay.html) is a minimalistic multimedia player.
* [ffprobe](https://ffmpeg.org/ffprobe.html) is a simple analysis tool to inspect
multimedia content.
* Additional small tools such as `aviocat`, `ismindex` and `qt-faststart`.
## Documentation
The offline documentation is available in the **doc/** directory.
The online documentation is available in the main [website](https://ffmpeg.org)
and in the [wiki](https://trac.ffmpeg.org).
### Examples
Coding examples are available in the **doc/examples** directory.
## License
FFmpeg codebase is mainly LGPL-licensed with optional components licensed under
GPL. Please refer to the LICENSE file for detailed information.
## Contributing
Patches should be submitted to the ffmpeg-devel mailing list using
`git format-patch` or `git send-email`. Github pull requests should be
avoided because they are not part of our review process and will be ignored.
|
antikpatel128/OUTPUT_DIR
|
antikpatel128
| 2023-09-09T09:54:33Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-08T14:21:44Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text:
---
# Slider SDXL - LoRA

<h2 id="heading-2">SDXL ONLY</h2><ul><li><p>weight: <strong>0 to 5.0</strong></p></li><li><p>positive: <strong>more realistic</strong></p></li><li><p>negative: <strong>less realistic, cartoon, painting, etc</strong></p></li></ul><p></p><p>I noticed the more bizarre your prompt gets, the more SDXL wants to turn it into a cartoon. This helps give you the ability to adjust the level of realism in a photo. All images were generated without refiner. I refuse. </p><p></p><p>If you like my work, I am not asking for coffee, but a kind review is always appreciated.<br /><br /></p>
## Image examples for the model:




|
hwkang/distilbert-base-uncased-finetuned-emotion
|
hwkang
| 2023-09-09T09:42:18Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-09T07:25:55Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9263847378294227
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2151
- Accuracy: 0.9265
- F1: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
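The model can be tried with the `text-classification` pipeline; a minimal sketch (the example sentence is illustrative, and label names follow the emotion dataset's classes):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="hwkang/distilbert-base-uncased-finetuned-emotion")

# Returns the predicted emotion label and its score
print(classifier("I can't wait to see the results of this experiment!"))
```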
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.815 | 1.0 | 250 | 0.3069 | 0.915 | 0.9144 |
| 0.2449 | 2.0 | 500 | 0.2151 | 0.9265 | 0.9264 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
rrozb/LunarLanderPPO
|
rrozb
| 2023-09-09T09:28:00Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-09T09:27:53Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -196.83 +/- 90.71
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'rrozb/LunarLanderPPO'
'batch_size': 512
'minibatch_size': 128}
```
|
Bhuvaneshwari/worktual_vectone_cai
|
Bhuvaneshwari
| 2023-09-09T09:27:48Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-09T09:13:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
ninachely/my-ruDialoGPT-medium-model
|
ninachely
| 2023-09-09T08:58:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:t-bank-ai/ruDialoGPT-medium",
"base_model:finetune:t-bank-ai/ruDialoGPT-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-09T08:13:49Z |
---
license: mit
base_model: tinkoff-ai/ruDialoGPT-medium
tags:
- generated_from_trainer
model-index:
- name: my-ruDialoGPT-medium-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-ruDialoGPT-medium-model
This model is a fine-tuned version of [tinkoff-ai/ruDialoGPT-medium](https://huggingface.co/tinkoff-ai/ruDialoGPT-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3729
## Model description
More information needed
## Intended uses & limitations
More information needed
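A minimal generation sketch, assuming the dialogue markup follows the base tinkoff-ai/ruDialoGPT-medium conventions (`@@ПЕРВЫЙ@@` / `@@ВТОРОЙ@@` speaker tags); sampling settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ninachely/my-ruDialoGPT-medium-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# One user turn; the model continues with the second speaker's reply
prompt = "@@ПЕРВЫЙ@@ привет, как дела? @@ВТОРОЙ@@"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.95, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```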
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.5214 | 1.0 | 3488 | 1.4633 |
| 1.399 | 2.0 | 6976 | 1.3927 |
| 1.3553 | 3.0 | 10464 | 1.3729 |
### Framework versions
- Transformers 4.32.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
Sunny98/q-FrozenLake-v1-4x4-noSlippery
|
Sunny98
| 2023-09-09T08:51:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-09T08:51:24Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook (not a library import)
model = load_from_hub(repo_id="Sunny98/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
KobanBanan/ruRoberta-large_ner
|
KobanBanan
| 2023-09-09T08:41:56Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:ai-forever/ruRoberta-large",
"base_model:finetune:ai-forever/ruRoberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-08T14:34:59Z |
---
base_model: ai-forever/ruRoberta-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ruRoberta-large_ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruRoberta-large_ner
This model is a fine-tuned version of [ai-forever/ruRoberta-large](https://huggingface.co/ai-forever/ruRoberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1853
- Precision: 0.7273
- Recall: 0.8
- F1: 0.7619
- Accuracy: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
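A minimal inference sketch with the `token-classification` pipeline (the entity label set is not documented in this card, so the example sentence and output labels are illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="KobanBanan/ruRoberta-large_ner",
    aggregation_strategy="simple",
)

# Example Russian sentence; entities are returned with spans, labels and scores
print(ner("Иван Петров работает в Сбербанке в Москве."))
```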
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.4171 | 0.5833 | 0.7 | 0.6364 | 0.8067 |
| No log | 2.0 | 30 | 0.2306 | 0.6765 | 0.7667 | 0.7188 | 0.9 |
| No log | 3.0 | 45 | 0.1853 | 0.7273 | 0.8 | 0.7619 | 0.9333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0.dev20230621+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Venkatesh4342/pegasus-samsum
|
Venkatesh4342
| 2023-09-09T07:39:35Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/pegasus-large",
"base_model:finetune:google/pegasus-large",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-07T14:22:01Z |
---
base_model: google/pegasus-large
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: pegasus-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 0.4659
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4091
- Rouge1: 0.4659
- Rouge2: 0.2345
- Rougel: 0.3946
- Rougelsum: 0.3951
- Gen Len: 17.7467
## Model description
More information needed
## Intended uses & limitations
More information needed
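A minimal usage sketch with the `summarization` pipeline (the dialogue below is illustrative of the SAMSum format):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Venkatesh4342/pegasus-samsum")

dialogue = """Amanda: I baked cookies. Do you want some?
Jerry: Sure!
Amanda: I'll bring you some tomorrow :-)"""

print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```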
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.8025 | 0.27 | 500 | 1.4403 | 0.4466 | 0.2101 | 0.3832 | 0.3841 | 21.64 |
| 1.5936 | 0.54 | 1000 | 1.3766 | 0.4786 | 0.2374 | 0.4017 | 0.4013 | 21.24 |
| 1.5926 | 0.81 | 1500 | 1.3910 | 0.5118 | 0.2643 | 0.4282 | 0.4286 | 20.2267 |
| 1.5067 | 1.09 | 2000 | 1.4028 | 0.4982 | 0.261 | 0.4155 | 0.4157 | 20.4267 |
| 1.5712 | 1.36 | 2500 | 1.4236 | 0.4712 | 0.234 | 0.3964 | 0.3969 | 17.0 |
| 1.6177 | 1.63 | 3000 | 1.4151 | 0.4768 | 0.2382 | 0.4019 | 0.4022 | 16.28 |
| 1.6289 | 1.9 | 3500 | 1.4112 | 0.4744 | 0.2346 | 0.402 | 0.4033 | 17.0267 |
| 1.6326 | 2.17 | 4000 | 1.4096 | 0.4682 | 0.234 | 0.3985 | 0.3994 | 17.1333 |
| 1.5929 | 2.44 | 4500 | 1.4093 | 0.4637 | 0.2342 | 0.3939 | 0.3942 | 17.16 |
| 1.4351 | 2.72 | 5000 | 1.4090 | 0.4684 | 0.2346 | 0.3953 | 0.3955 | 17.8133 |
| 1.6445 | 2.99 | 5500 | 1.4091 | 0.4659 | 0.2345 | 0.3946 | 0.3951 | 17.7467 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
922-CA/natsuki-lm-lora-tests
|
922-CA
| 2023-09-09T07:14:46Z | 0 | 0 | null |
[
"license:llama2",
"region:us"
] | null | 2023-09-07T08:39:45Z |
---
license: llama2
---
For best results, use "Player" and "Natsuki" like so:
\nPlayer: (prompt)\nNatsuki:
# l2-7b-natsuki-v0.1 (09/07/2023)
* Fine-tuned on Natsuki dialogue from DDLC (dataset of ~800 items augmented by [MythoMax-l2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b) to turn into multi-turn chat dialogue)
* From chat LLaMA-2-7b
* Lora of [l2-7b-natsuki-ddlc-v0.1](https://huggingface.co/922-CA/l2-7b-natsuki-ddlc-v0.1)
# l2-7b-natsuki-v0.1-Kv2 (09/08/2023)
* Fine-tuned on Natsuki dialogue from DDLC (dataset of ~800 items augmented by [MythoMax-l2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b) to turn into multi-turn chat dialogue)
* From [Kimiko-LLaMA-2-7b](https://huggingface.co/johnwick123forevr/Llama2-chat-kimiko-Sharded-2gb)
* Lora of [l2-7b-natsuki-ddlc-v0.1-Kv2](https://huggingface.co/922-CA/l2-7b-natsuki-ddlc-v0.1-Kv2)
|
922-CA/sayori-lm-lora-tests
|
922-CA
| 2023-09-09T07:13:10Z | 0 | 0 | null |
[
"license:llama2",
"region:us"
] | null | 2023-09-07T08:40:14Z |
---
license: llama2
---
For best results, use "Player" and "Sayori" like so:
\nPlayer: (prompt)\nSayori:
# l2-7b-sayori-v0.1 (09/07/2023)
* Fine-tuned on Sayori dialogue from DDLC (dataset of ~600 items augmented by [MythoMax-l2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b) to turn into multi-turn chat dialogue)
* From chat LLaMA-2-7b
* Lora of [l2-7b-sayori-ddlc-v0.1](https://huggingface.co/922-CA/l2-7b-sayori-ddlc-v0.1)
# l2-7b-sayori-v0.1-Kv2 (09/08/2023)
* Fine-tuned on Sayori dialogue from DDLC (dataset of ~600 items augmented by [MythoMax-l2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b) to turn into multi-turn chat dialogue)
* From [Kimiko-LLaMA-2-7b](https://huggingface.co/johnwick123forevr/Llama2-chat-kimiko-Sharded-2gb)
* Lora of [l2-7b-sayori-ddlc-v0.1-Kv2](https://huggingface.co/922-CA/l2-7b-sayori-ddlc-v0.1-Kv2)
|
FredNajjar/my_awesome_qa_model
|
FredNajjar
| 2023-09-09T07:12:36Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-09T02:17:32Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6687
## Model description
More information needed
## Intended uses & limitations
More information needed
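A minimal extractive-QA sketch with the `question-answering` pipeline (question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="FredNajjar/my_awesome_qa_model")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```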
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.2079 |
| 2.4745 | 2.0 | 500 | 1.6112 |
| 2.4745 | 3.0 | 750 | 1.5901 |
| 0.9178 | 4.0 | 1000 | 1.6356 |
| 0.9178 | 5.0 | 1250 | 1.6687 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
osieosie/bloom-mnli-4bit-7b-bnb-seed87
|
osieosie
| 2023-09-09T07:10:59Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-09T07:10:58Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
922-CA/l2-7b-yuri-ddlc-v0.1
|
922-CA
| 2023-09-09T07:07:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-07T09:03:31Z |
---
license: llama2
---
# l2-7b-yuri-ddlc-v0.1:
* Experimental LLaMA-2 7b chat fine-tuned for Yuri character from DDLC
* Fine-tuned on a dataset of ~1300 items (dialogue scraped from game augmented by [MythoMax-l2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b) to turn each into snippets of multi-turn chat dialogue between Player and Yuri)
* [GGMLs](https://huggingface.co/922-CA/l2-7b-yuri-ddlc-v0.1-ggml), [GGUFs](https://huggingface.co/922-CA/l2-7b-yuri-ddlc-v0.1-gguf)
* [QLoras (hf and GGML)](https://huggingface.co/922-CA/yuri-lm-lora-tests/tree/main/l2-7b-yuri-v0.1)
### USAGE
This is meant to be mainly a chat model with limited RP ability.
For best results, replace "Human" and "Assistant" with "Player" and "Yuri" like so:
\nPlayer: (prompt)\nYuri:
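A minimal transformers sketch using that format (the prompt text and sampling settings are illustrative, not tuned values):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "922-CA/l2-7b-yuri-ddlc-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

prompt = "\nPlayer: What book are you reading today?\nYuri:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=120, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```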
### HYPERPARAMS
* Trained for 2 epochs
* rank: 32
* lora alpha: 64
* lora dropout: 0.5
* lr: 2e-4
* batch size: 2
* warmup ratio: 0.1
* grad steps: 4
### WARNINGS AND DISCLAIMERS
Note that aside from formatting and other minor edits, the generated portion of the dataset is used mostly as generated by the LM. As such, while this version is better at coherency and chatting than previous ones, it may not perfectly reflect Yuri's characteristics (i.e. she may not be as timid, may have different preferences, etc.). The next version will train on a manually curated and edited version of this dataset, where dialogue will be edited to better reflect her characteristics.
Other tests are to come (e.g. fine-tuning on other base models, like Airoboros or a Kimiko-based model).
Finally, this model is not guaranteed to produce aligned or safe outputs; use at your own risk.
|
asyafiqe/Merak-7B-v3-Mini-Orca-Indo
|
asyafiqe
| 2023-09-09T07:00:02Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"id",
"dataset:asyafiqe/orca_mini_v1_indonesia",
"arxiv:2307.09288",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-08-26T08:36:51Z |
---
inference: false
license: cc-by-nc-sa-4.0
datasets:
- asyafiqe/orca_mini_v1_indonesia
language:
- en
- id
---
# 🦚Merak-7B-v3-Mini-Orca🐳
<p align="center">
<img src="https://i.imgur.com/39sQd3h.png" alt="Merak Orca" width="300" height="300"/>
</p>
**Merak-7B-v3-Mini-Orca** is Ichsan2895's [Merak-7B-v3](https://huggingface.co/Ichsan2895/Merak-7B-v3) fine-tuned
on Bahasa Indonesia translated psmathur's [orca_mini_v1_dataset](https://huggingface.co/datasets/psmathur/orca_mini_v1_dataset).
## Usage
This model fits on a 16GB VRAM GPU (a Google Colab T4 will do); by using BitsAndBytes it can run on a 6GB VRAM GPU.
[](https://colab.research.google.com/drive/11xmPcRNirGwZcpgmNPNpUioJUG4PQBuh)
**Quantized** versions are available:
GPTQ: https://huggingface.co/asyafiqe/Merak-7B-v3-Mini-Orca-Indo-GPTQ
GGML/GGUF: I will try to make this version once GGUF merge is stable.
Start chatting with Merak Mini Orca using the following code snippet:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("asyafiqe/Merak-7B-v3-Mini-Orca-Indo")
model = AutoModelForCausalLM.from_pretrained("asyafiqe/Merak-7B-v3-Mini-Orca-Indo", torch_dtype=torch.float16, device_map="auto")
system_prompt = "SYSTEM: 'Anda adalah asisten AI. Anda akan diberi tugas. Anda harus menghasilkan jawaban yang rinci dan panjang.\n"
message = "Buatlah rencana untuk mengurangi penggunaan listrik di rumah."
prompt = f"{system_prompt}USER: {message}\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, temperature=0.1, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
### Prompt format
You can use the [Vicuna 1.1](https://github.com/oobabooga/text-generation-webui/blob/main/instruction-templates/Vicuna-v1.1.yaml)
format for oobabooga's text generation web UI.
```
SYSTEM: Anda adalah asisten AI. Anda akan diberi tugas. Anda harus memberikan jawaban yang rinci dan panjang.
USER: <prompt> (without the <>)
ASSISTANT:
```
## Training details
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Merak-7B-v3-Mini-Orca was instruction fine-tuned on 2 x 3090-24GB for 6 hours. [LoRA](https://github.com/microsoft/LoRA), [DeepSpeed ZeRO-2](https://github.com/microsoft/DeepSpeed), and [FlashAttention](https://github.com/Dao-AILab/flash-attention) were implemented during training using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
| Hyperparameter | value |
| ------ | ------ |
| learning rate | 0.0004 |
| batch size | 16 |
| microbatch size | 2 |
| warmup step | 100 |
| epochs | 2 |
| weight decay | 0.0 |
| lr scheduler | cosine |
| lora alpha | 16 |
| lora rank | 16 |
| lora dropout | 0.05 |
| lora target modules | q_proj, v_proj, k_proj, o_proj |
| cutoff length | 4096 |
#### Training loss
| Step | Train Loss |
| ------ | ------ |
| 1 | 0.9578 |
| 100 | 0.816 |
| 200 | 0.7819 |
| 300 | 0.7279 |
| 400 | 0.732 |
| 500 | 0.7139 |
| 600 | 0.6829 |
| 700 | 0.6641 |
| 800 | 0.6553 |
#### Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## Citation
```
@Paper{arXiv,
author = {Touvron, et al},
title = {Llama 2: Open Foundation and Fine-Tuned Chat Models},
journal = {arXiv preprint arXiv:2307.09288},
year = {2023}
}
@misc{orca_mini_v3_70b,
author = {Pankaj Mathur},
title = {orca_mini_v3_70b: An Orca Style Llama2-70b model},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v3_70b}},
}
@article{hu2021lora,
title={LoRA: Low-Rank Adaptation of Large Language Models},
author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
journal={CoRR},
year={2021}
}
```
|
Dorgus/horse_model
|
Dorgus
| 2023-09-09T06:50:17Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stablediffusionapi/bb95-furry-mix",
"base_model:finetune:stablediffusionapi/bb95-furry-mix",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-09T03:44:22Z |
---
license: creativeml-openrail-m
base_model: stablediffusionapi/bb95-furry-mix
instance_prompt: handsome sks anthro horse with black and white fur
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Dorgus/horse_model
This is a dreambooth model derived from stablediffusionapi/bb95-furry-mix. The weights were trained on handsome sks anthro horse with black and white fur using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
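A minimal inference sketch with diffusers, using the instance prompt the weights were trained on (step count and output filename are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Dorgus/horse_model", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "handsome sks anthro horse with black and white fur",
    num_inference_steps=30,
).images[0]
image.save("horse.png")
```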
|
re2panda/polyglot_12b_grade_school_math
|
re2panda
| 2023-09-09T06:48:15Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-09T06:47:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
sk0032/coqui-tts-model-adam
|
sk0032
| 2023-09-09T06:43:08Z | 2 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"endpoints_compatible",
"region:us"
] | null | 2023-09-08T12:29:19Z |
Epochs: 11,276
GLOBAL_STEP: 1248150
|
shenshan/chinese-alpaca-2-gguf
|
shenshan
| 2023-09-09T06:42:50Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-08T08:36:30Z |
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- text-generation-inference
---
# Chinese-Alpaca-2 7B & 13B
Quantized by [llama.cpp](https://github.com/ggerganov/llama.cpp)
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for details.
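A minimal sketch for loading one of the GGUF files with `llama-cpp-python` (the filename below is a placeholder; pick an actual file from this repo, and see the linked project for the proper Alpaca-style prompt template):

```python
from llama_cpp import Llama

# Placeholder filename: replace with a GGUF file that actually exists in this repo
llm = Llama(model_path="chinese-alpaca-2-7b.Q4_K_M.gguf", n_ctx=4096)

# "Hello, please introduce yourself." (raw completion; the chat template is omitted here)
output = llm("你好,请介绍一下你自己。", max_tokens=128)
print(output["choices"][0]["text"])
```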
|
922-CA/l2-7b-sayori-ddlc-v0.1-gguf
|
922-CA
| 2023-09-09T06:28:12Z | 1 | 0 | null |
[
"gguf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2023-09-08T11:01:16Z |
---
license: llama2
---
GGUFs of [l2-7b-sayori-ddlc-v0.1](https://huggingface.co/922-CA/l2-7b-sayori-ddlc-v0.1). (Primarily tested and run with Koboldcpp v1.41+).
QLora (hf and GGML) [here](https://huggingface.co/922-CA/sayori-lm-lora-tests/tree/main/l2-7b-sayori-v0.1).
|
razhan/bart-kurd-spell-base-05_10
|
razhan
| 2023-09-09T06:20:40Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-08T17:49:17Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: bart-kurd-spell-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-kurd-spell-base
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1930
- Cer: 1.5424
- Wer: 8.3088
- Gen Len: 12.6945
## Model description
More information needed
## Intended uses & limitations
More information needed
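A minimal correction sketch with the `text2text-generation` pipeline (the misspelled Sorani input is illustrative):

```python
from transformers import pipeline

corrector = pipeline("text2text-generation", model="razhan/bart-kurd-spell-base-05_10")

misspelled = "سلاو چونی"  # illustrative input written without the Kurdish letters ڵ and ۆ
print(corrector(misspelled, max_length=64)[0]["generated_text"])
```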
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:-------:|:-------:|
| 0.4548 | 1.54 | 20000 | 0.4117 | 2.8856 | 13.6181 | 12.7807 |
| 0.2723 | 3.07 | 40000 | 0.2736 | 2.1004 | 10.5883 | 12.6808 |
| 0.2246 | 4.61 | 60000 | 0.2303 | 1.8035 | 9.4897 | 12.7048 |
| 0.1812 | 6.14 | 80000 | 0.2122 | 1.6804 | 8.9349 | 12.6937 |
| 0.1693 | 7.68 | 100000 | 0.2001 | 1.589 | 8.5464 | 12.7045 |
| 0.1498 | 9.22 | 120000 | 0.1942 | 1.5546 | 8.3598 | 12.6935 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 1.13.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
masonbarnes/open-llm-search
|
masonbarnes
| 2023-09-09T06:00:09Z | 56 | 8 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-04T21:55:24Z |
---
license: llama2
language:
- en
---
# **Model Overview**
As the demand for large language models grows, a common limitation surfaces: their inability to directly search the internet. Although tech giants like Google (with Bard), Bing, and Perplexity are addressing this challenge, their proprietary methods have data logging issues.
**Introducing Open LLM Search**: a specialized adaptation of Together AI's `llama-2-7b-32k` model, purpose-built for extracting information from web pages. While the model has only 7 billion parameters, its fine-tuned capabilities and expanded context limit enable it to excel in search tasks.
**License:** This model uses Meta's Llama 2 license.
# **Fine-Tuning Process**
The model's fine tuning involved a combination of GPT-4 and GPT-4-32k to generate synthetic data. Here is the training workflow used:
1. Use GPT-4 to generate a multitude of queries.
2. For each query, identify the top five website results from Google.
3. Extract content from these websites and use GPT-4-32k for their summarization.
4. Record the text and summaries from GPT-4-32k for fine-tuning.
5. Feed the summaries from all five sources to GPT-4 to craft a cohesive response.
6. Document both the input and output from GPT-4 for fine-tuning.
Fine tuning was done with an `<instructions>:`, `<user>:`, and `<assistant>:` format.
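A small sketch of assembling a prompt in that format (the exact whitespace between sections is an assumption):

```python
def build_prompt(instructions: str, user: str) -> str:
    # Follows the <instructions>/<user>/<assistant> tags described above
    return f"<instructions>: {instructions}\n<user>: {user}\n<assistant>:"

prompt = build_prompt(
    "Summarize the provided web page content and answer the user's question.",
    "What is the capital of France?",
)
print(prompt)
```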
# **Getting Started**
- Experience it firsthand! Check out the live demo [here](https://huggingface.co/spaces/masonbarnes/open-llm-search).
- For DIY enthusiasts, explore or self-deploy this solution using our [GitHub repository](https://github.com/MasonBarnes/open-llm-search).
|
trieudemo11/llama_7b_attrb_cate_10m_0
|
trieudemo11
| 2023-09-09T06:00:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-09T05:59:52Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
tmnam20/codellama_instruct_pt_text2sql
|
tmnam20
| 2023-09-09T05:45:58Z | 0 | 0 | null |
[
"generated_from_trainer",
"dataset:tmnam20/InstructNSText2SQL",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | 2023-09-06T02:59:32Z |
---
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- generated_from_trainer
datasets:
- tmnam20/InstructNSText2SQL
model-index:
- name: codellama_instruct_pt_text2sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codellama_instruct_pt_text2sql
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the tmnam20/InstructNSText2SQL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0150
## Model description
More information needed
## Intended uses & limitations
More information needed
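A generic loading sketch, assuming the full fine-tuned weights are stored in this repo (the instruction template used by tmnam20/InstructNSText2SQL is not documented here, so the prompt is a plain natural-language placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tmnam20/codellama_instruct_pt_text2sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Write an SQL query that returns the names of all customers who placed an order in 2023."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```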
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0693 | 0.22 | 2000 | 0.0589 |
| 0.047 | 0.45 | 4000 | 0.0396 |
| 0.0364 | 0.67 | 6000 | 0.0307 |
| 0.0311 | 0.89 | 8000 | 0.0278 |
| 0.0251 | 1.11 | 10000 | 0.0241 |
| 0.0243 | 1.34 | 12000 | 0.0228 |
| 0.0227 | 1.56 | 14000 | 0.0223 |
| 0.0212 | 1.78 | 16000 | 0.0201 |
| 0.0202 | 2.01 | 18000 | 0.0182 |
| 0.016 | 2.23 | 20000 | 0.0184 |
| 0.0156 | 2.45 | 22000 | 0.0179 |
| 0.015 | 2.67 | 24000 | 0.0173 |
| 0.0147 | 2.9 | 26000 | 0.0165 |
| 0.0112 | 3.12 | 28000 | 0.0165 |
| 0.0109 | 3.34 | 30000 | 0.0161 |
| 0.0109 | 3.56 | 32000 | 0.0155 |
| 0.0105 | 3.79 | 34000 | 0.0152 |
| 0.0104 | 4.01 | 36000 | 0.0150 |
| 0.0077 | 4.23 | 38000 | 0.0158 |
| 0.0078 | 4.46 | 40000 | 0.0151 |
| 0.0076 | 4.68 | 42000 | 0.0150 |
| 0.0077 | 4.9 | 44000 | 0.0150 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dsmsb/esg-tweet-bert_0909_testing_v1
|
dsmsb
| 2023-09-09T05:44:15Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-09T02:38:31Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: esg-tweet-bert_0909_testing_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esg-tweet-bert_0909_testing_v1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
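A minimal classification sketch (the label set is not documented in this card, so the output labels are whatever the fine-tune used; the example tweet is illustrative):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="dsmsb/esg-tweet-bert_0909_testing_v1")

print(clf("The company announced a new plan to cut its carbon emissions by 40% by 2030."))
```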
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 246 | 0.0440 | 0.9887 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bkowshik/swag-multiple-choice
|
bkowshik
| 2023-09-09T05:32:12Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2023-09-08T12:48:11Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: swag-multiple-choice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swag-multiple-choice
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0120
- Accuracy: 0.7052
## Model description
More information needed
## Intended uses & limitations
More information needed
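A minimal multiple-choice sketch (context and endings are illustrative; the model scores each (context, ending) pair and the highest logit wins):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "bkowshik/swag-multiple-choice"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "She opens the fridge and takes out a carton of eggs."
endings = [
    "She cracks the eggs into a bowl.",
    "She throws the fridge out of the window.",
    "She drives the eggs to work.",
    "She plants the eggs in the garden.",
]

# Encode all (context, ending) pairs as one example of shape (1, num_choices, seq_len)
encoding = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print(endings[logits.argmax(dim=-1).item()])
```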
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 157 | 0.8148 | 0.6848 |
| No log | 2.0 | 314 | 0.8738 | 0.702 |
| No log | 3.0 | 471 | 1.0120 | 0.7052 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Onutoa/1_8e-3_10_0.5
|
Onutoa
| 2023-09-09T04:49:16Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-09T01:48:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_8e-3_10_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_8e-3_10_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9754
- Accuracy: 0.7459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.008
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0295 | 1.0 | 590 | 5.2308 | 0.6217 |
| 3.1648 | 2.0 | 1180 | 2.6673 | 0.3908 |
| 2.5921 | 3.0 | 1770 | 5.0497 | 0.3761 |
| 2.9042 | 4.0 | 2360 | 2.2586 | 0.6291 |
| 2.4411 | 5.0 | 2950 | 6.5105 | 0.6217 |
| 2.3131 | 6.0 | 3540 | 2.7244 | 0.5183 |
| 2.0563 | 7.0 | 4130 | 4.6938 | 0.3783 |
| 1.9468 | 8.0 | 4720 | 1.5045 | 0.6862 |
| 1.9269 | 9.0 | 5310 | 1.7666 | 0.6734 |
| 1.9701 | 10.0 | 5900 | 1.8173 | 0.6780 |
| 1.8231 | 11.0 | 6490 | 1.6929 | 0.6752 |
| 1.7563 | 12.0 | 7080 | 1.3455 | 0.6862 |
| 1.726 | 13.0 | 7670 | 1.2870 | 0.6786 |
| 1.6706 | 14.0 | 8260 | 1.3862 | 0.6951 |
| 1.5876 | 15.0 | 8850 | 1.4384 | 0.6587 |
| 1.5067 | 16.0 | 9440 | 1.5336 | 0.6985 |
| 1.5777 | 17.0 | 10030 | 1.9860 | 0.5972 |
| 1.4323 | 18.0 | 10620 | 1.2068 | 0.7076 |
| 1.4228 | 19.0 | 11210 | 1.8071 | 0.6780 |
| 1.4335 | 20.0 | 11800 | 4.1127 | 0.6346 |
| 1.4549 | 21.0 | 12390 | 1.2302 | 0.7131 |
| 1.277 | 22.0 | 12980 | 1.2829 | 0.6771 |
| 1.2962 | 23.0 | 13570 | 1.2152 | 0.7070 |
| 1.4076 | 24.0 | 14160 | 1.5758 | 0.6529 |
| 1.3427 | 25.0 | 14750 | 1.1333 | 0.6997 |
| 1.1936 | 26.0 | 15340 | 1.1974 | 0.6917 |
| 1.1937 | 27.0 | 15930 | 1.2653 | 0.6948 |
| 1.2784 | 28.0 | 16520 | 1.0620 | 0.7242 |
| 1.1605 | 29.0 | 17110 | 2.7859 | 0.6734 |
| 1.1438 | 30.0 | 17700 | 1.8633 | 0.6428 |
| 1.1406 | 31.0 | 18290 | 1.6275 | 0.7098 |
| 1.0993 | 32.0 | 18880 | 1.2765 | 0.6969 |
| 1.158 | 33.0 | 19470 | 1.1218 | 0.7058 |
| 1.0432 | 34.0 | 20060 | 1.0562 | 0.7245 |
| 1.0295 | 35.0 | 20650 | 1.3146 | 0.7251 |
| 1.0041 | 36.0 | 21240 | 1.0308 | 0.7150 |
| 1.0104 | 37.0 | 21830 | 1.0149 | 0.7242 |
| 1.0096 | 38.0 | 22420 | 1.1232 | 0.7083 |
| 0.9661 | 39.0 | 23010 | 1.0316 | 0.7251 |
| 0.9183 | 40.0 | 23600 | 1.2166 | 0.7055 |
| 0.9298 | 41.0 | 24190 | 1.9118 | 0.7040 |
| 0.8799 | 42.0 | 24780 | 1.0190 | 0.7306 |
| 0.954 | 43.0 | 25370 | 1.0761 | 0.7263 |
| 0.853 | 44.0 | 25960 | 1.2006 | 0.7080 |
| 1.0647 | 45.0 | 26550 | 1.1605 | 0.7379 |
| 0.8562 | 46.0 | 27140 | 1.2208 | 0.7122 |
| 0.8421 | 47.0 | 27730 | 0.9974 | 0.7388 |
| 0.7865 | 48.0 | 28320 | 1.1207 | 0.7376 |
| 0.8998 | 49.0 | 28910 | 1.1221 | 0.7080 |
| 0.8044 | 50.0 | 29500 | 1.0191 | 0.7205 |
| 0.7771 | 51.0 | 30090 | 0.9921 | 0.7364 |
| 0.7886 | 52.0 | 30680 | 1.1379 | 0.7419 |
| 0.7756 | 53.0 | 31270 | 1.3039 | 0.7315 |
| 0.7232 | 54.0 | 31860 | 1.1143 | 0.7385 |
| 0.69 | 55.0 | 32450 | 1.1024 | 0.7239 |
| 0.7313 | 56.0 | 33040 | 1.3560 | 0.7370 |
| 0.7266 | 57.0 | 33630 | 0.9763 | 0.7431 |
| 0.7084 | 58.0 | 34220 | 1.4480 | 0.7291 |
| 0.7072 | 59.0 | 34810 | 1.4463 | 0.7336 |
| 0.6889 | 60.0 | 35400 | 1.2983 | 0.7330 |
| 0.6745 | 61.0 | 35990 | 0.9898 | 0.7413 |
| 0.6739 | 62.0 | 36580 | 0.9817 | 0.7373 |
| 0.6513 | 63.0 | 37170 | 0.9999 | 0.7391 |
| 0.6665 | 64.0 | 37760 | 0.9840 | 0.7367 |
| 0.6428 | 65.0 | 38350 | 1.0120 | 0.7284 |
| 0.6418 | 66.0 | 38940 | 1.0021 | 0.7401 |
| 0.6185 | 67.0 | 39530 | 1.0063 | 0.7327 |
| 0.6259 | 68.0 | 40120 | 1.0108 | 0.7339 |
| 0.6165 | 69.0 | 40710 | 1.0279 | 0.7440 |
| 0.6393 | 70.0 | 41300 | 1.1899 | 0.7183 |
| 0.5869 | 71.0 | 41890 | 0.9767 | 0.7333 |
| 0.605 | 72.0 | 42480 | 1.4097 | 0.7367 |
| 0.5906 | 73.0 | 43070 | 1.0036 | 0.7358 |
| 0.5704 | 74.0 | 43660 | 1.3105 | 0.7443 |
| 0.5872 | 75.0 | 44250 | 1.0241 | 0.7242 |
| 0.5755 | 76.0 | 44840 | 1.1519 | 0.7410 |
| 0.5967 | 77.0 | 45430 | 1.1481 | 0.7431 |
| 0.57 | 78.0 | 46020 | 1.0164 | 0.7398 |
| 0.5599 | 79.0 | 46610 | 1.1657 | 0.7391 |
| 0.5458 | 80.0 | 47200 | 1.1020 | 0.7422 |
| 0.5299 | 81.0 | 47790 | 1.0836 | 0.7437 |
| 0.5285 | 82.0 | 48380 | 0.9682 | 0.7391 |
| 0.538 | 83.0 | 48970 | 1.1895 | 0.7193 |
| 0.5277 | 84.0 | 49560 | 0.9778 | 0.7459 |
| 0.525 | 85.0 | 50150 | 0.9893 | 0.7364 |
| 0.5268 | 86.0 | 50740 | 0.9745 | 0.7434 |
| 0.518 | 87.0 | 51330 | 0.9654 | 0.7450 |
| 0.5212 | 88.0 | 51920 | 0.9665 | 0.7382 |
| 0.5132 | 89.0 | 52510 | 1.0605 | 0.7474 |
| 0.5155 | 90.0 | 53100 | 0.9605 | 0.7440 |
| 0.4986 | 91.0 | 53690 | 1.0163 | 0.7480 |
| 0.5004 | 92.0 | 54280 | 1.0187 | 0.7312 |
| 0.4846 | 93.0 | 54870 | 0.9721 | 0.7440 |
| 0.4963 | 94.0 | 55460 | 1.0295 | 0.7468 |
| 0.4759 | 95.0 | 56050 | 1.0004 | 0.7468 |
| 0.4905 | 96.0 | 56640 | 1.0361 | 0.7474 |
| 0.4994 | 97.0 | 57230 | 0.9591 | 0.7446 |
| 0.4673 | 98.0 | 57820 | 0.9604 | 0.7431 |
| 0.4734 | 99.0 | 58410 | 0.9771 | 0.7462 |
| 0.4588 | 100.0 | 59000 | 0.9754 | 0.7459 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
lampshade/Al-Jarreau
|
lampshade
| 2023-09-09T04:45:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-08T02:08:40Z |
---
license: creativeml-openrail-m
---
|
Onutoa/1_6e-3_10_0.5
|
Onutoa
| 2023-09-09T04:29:22Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-09T01:30:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_6e-3_10_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_6e-3_10_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9536
- Accuracy: 0.7596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.006
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.948 | 1.0 | 590 | 2.2396 | 0.6214 |
| 2.5635 | 2.0 | 1180 | 2.2693 | 0.6275 |
| 2.5246 | 3.0 | 1770 | 1.9556 | 0.6141 |
| 2.329 | 4.0 | 2360 | 2.3951 | 0.4801 |
| 2.1726 | 5.0 | 2950 | 1.7234 | 0.6618 |
| 2.0265 | 6.0 | 3540 | 1.5347 | 0.6679 |
| 2.0227 | 7.0 | 4130 | 1.8508 | 0.6064 |
| 1.8725 | 8.0 | 4720 | 2.0863 | 0.6584 |
| 1.8575 | 9.0 | 5310 | 4.0052 | 0.4639 |
| 1.8071 | 10.0 | 5900 | 3.1552 | 0.6468 |
| 1.6655 | 11.0 | 6490 | 1.3147 | 0.7104 |
| 1.501 | 12.0 | 7080 | 1.3005 | 0.6844 |
| 1.538 | 13.0 | 7670 | 1.7051 | 0.6948 |
| 1.4114 | 14.0 | 8260 | 1.4922 | 0.7028 |
| 1.3916 | 15.0 | 8850 | 1.6514 | 0.7034 |
| 1.3373 | 16.0 | 9440 | 1.9420 | 0.5896 |
| 1.271 | 17.0 | 10030 | 2.9731 | 0.6624 |
| 1.3123 | 18.0 | 10620 | 1.4756 | 0.6609 |
| 1.2775 | 19.0 | 11210 | 1.4888 | 0.6612 |
| 1.2341 | 20.0 | 11800 | 1.4493 | 0.7159 |
| 1.1907 | 21.0 | 12390 | 1.7638 | 0.7110 |
| 1.2035 | 22.0 | 12980 | 1.0716 | 0.7291 |
| 1.0365 | 23.0 | 13570 | 1.2975 | 0.6853 |
| 1.1041 | 24.0 | 14160 | 1.0275 | 0.7220 |
| 1.1326 | 25.0 | 14750 | 1.0228 | 0.7385 |
| 1.0261 | 26.0 | 15340 | 1.1473 | 0.7076 |
| 1.0168 | 27.0 | 15930 | 1.0435 | 0.7205 |
| 1.0653 | 28.0 | 16520 | 1.0105 | 0.7358 |
| 0.9418 | 29.0 | 17110 | 1.0397 | 0.7232 |
| 1.0591 | 30.0 | 17700 | 1.3640 | 0.6917 |
| 0.9186 | 31.0 | 18290 | 0.9679 | 0.7459 |
| 0.8665 | 32.0 | 18880 | 1.0310 | 0.7303 |
| 0.9005 | 33.0 | 19470 | 1.0498 | 0.7235 |
| 0.8494 | 34.0 | 20060 | 0.9766 | 0.7358 |
| 0.8474 | 35.0 | 20650 | 1.0077 | 0.7465 |
| 0.7973 | 36.0 | 21240 | 1.0674 | 0.7428 |
| 0.8049 | 37.0 | 21830 | 1.0074 | 0.7398 |
| 0.8241 | 38.0 | 22420 | 0.9613 | 0.7453 |
| 0.7793 | 39.0 | 23010 | 0.9864 | 0.7398 |
| 0.7781 | 40.0 | 23600 | 1.0741 | 0.7456 |
| 0.7539 | 41.0 | 24190 | 0.9809 | 0.7550 |
| 0.7403 | 42.0 | 24780 | 0.9993 | 0.7339 |
| 0.7494 | 43.0 | 25370 | 0.9887 | 0.7477 |
| 0.7091 | 44.0 | 25960 | 1.1792 | 0.7125 |
| 0.7236 | 45.0 | 26550 | 0.9549 | 0.7443 |
| 0.6947 | 46.0 | 27140 | 1.3568 | 0.7440 |
| 0.6928 | 47.0 | 27730 | 1.0682 | 0.7517 |
| 0.6578 | 48.0 | 28320 | 1.0993 | 0.7486 |
| 0.7723 | 49.0 | 28910 | 1.0381 | 0.7260 |
| 0.7169 | 50.0 | 29500 | 0.9510 | 0.7486 |
| 0.6424 | 51.0 | 30090 | 1.0781 | 0.7281 |
| 0.6652 | 52.0 | 30680 | 0.9623 | 0.7541 |
| 0.6274 | 53.0 | 31270 | 0.9476 | 0.7498 |
| 0.6295 | 54.0 | 31860 | 0.9461 | 0.7474 |
| 0.6252 | 55.0 | 32450 | 1.0873 | 0.7278 |
| 0.632 | 56.0 | 33040 | 0.9470 | 0.7492 |
| 0.5865 | 57.0 | 33630 | 1.4737 | 0.7355 |
| 0.6029 | 58.0 | 34220 | 1.0871 | 0.7477 |
| 0.5935 | 59.0 | 34810 | 1.0781 | 0.7514 |
| 0.6023 | 60.0 | 35400 | 0.9968 | 0.7581 |
| 0.5849 | 61.0 | 35990 | 1.0700 | 0.7547 |
| 0.5813 | 62.0 | 36580 | 1.2525 | 0.7425 |
| 0.5557 | 63.0 | 37170 | 0.9643 | 0.7541 |
| 0.541 | 64.0 | 37760 | 1.0179 | 0.7547 |
| 0.5693 | 65.0 | 38350 | 1.0064 | 0.7401 |
| 0.5562 | 66.0 | 38940 | 1.2333 | 0.7367 |
| 0.5677 | 67.0 | 39530 | 0.9976 | 0.7388 |
| 0.5357 | 68.0 | 40120 | 0.9795 | 0.7413 |
| 0.5372 | 69.0 | 40710 | 1.1113 | 0.7462 |
| 0.5563 | 70.0 | 41300 | 1.1366 | 0.7492 |
| 0.5377 | 71.0 | 41890 | 0.9343 | 0.7502 |
| 0.5442 | 72.0 | 42480 | 1.1735 | 0.7465 |
| 0.5124 | 73.0 | 43070 | 0.9499 | 0.7514 |
| 0.5007 | 74.0 | 43660 | 1.2104 | 0.7456 |
| 0.5094 | 75.0 | 44250 | 0.9865 | 0.7474 |
| 0.5118 | 76.0 | 44840 | 1.0542 | 0.7474 |
| 0.5166 | 77.0 | 45430 | 0.9762 | 0.7615 |
| 0.5071 | 78.0 | 46020 | 0.9333 | 0.7581 |
| 0.4961 | 79.0 | 46610 | 1.0310 | 0.7535 |
| 0.4863 | 80.0 | 47200 | 1.0242 | 0.7492 |
| 0.4801 | 81.0 | 47790 | 1.0528 | 0.7535 |
| 0.4975 | 82.0 | 48380 | 1.0188 | 0.7554 |
| 0.4868 | 83.0 | 48970 | 0.9455 | 0.7596 |
| 0.4661 | 84.0 | 49560 | 0.9841 | 0.7557 |
| 0.4765 | 85.0 | 50150 | 0.9570 | 0.7538 |
| 0.4732 | 86.0 | 50740 | 1.0383 | 0.7535 |
| 0.4846 | 87.0 | 51330 | 0.9560 | 0.7587 |
| 0.4641 | 88.0 | 51920 | 0.9716 | 0.7578 |
| 0.477 | 89.0 | 52510 | 0.9581 | 0.7606 |
| 0.4567 | 90.0 | 53100 | 0.9674 | 0.7569 |
| 0.4567 | 91.0 | 53690 | 0.9718 | 0.7587 |
| 0.4676 | 92.0 | 54280 | 0.9535 | 0.7520 |
| 0.4532 | 93.0 | 54870 | 0.9593 | 0.7563 |
| 0.4727 | 94.0 | 55460 | 0.9611 | 0.7584 |
| 0.4535 | 95.0 | 56050 | 0.9539 | 0.7602 |
| 0.4569 | 96.0 | 56640 | 0.9506 | 0.7587 |
| 0.4417 | 97.0 | 57230 | 0.9616 | 0.7584 |
| 0.4314 | 98.0 | 57820 | 0.9488 | 0.7593 |
| 0.4318 | 99.0 | 58410 | 0.9439 | 0.7587 |
| 0.4415 | 100.0 | 59000 | 0.9536 | 0.7596 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
krishanusinha20/marketmail
|
krishanusinha20
| 2023-09-09T04:28:09Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T05:47:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
xiaol/RWKV-claude-4-World-7B-65k
|
xiaol
| 2023-09-09T04:26:25Z | 0 | 52 | null |
[
"dataset:Norquinal/claude_multiround_chat_30k",
"dataset:OpenLeecher/Teatime",
"license:apache-2.0",
"region:us"
] | null | 2023-08-05T08:07:49Z |
---
license: apache-2.0
datasets:
- Norquinal/claude_multiround_chat_30k
- OpenLeecher/Teatime
---
# RWKV role play model
## According to our community users, this model is better than Claude 2.
This model was trained from the RWKV World 7B model with a 65336 context length, and it can handle Claude-like tasks.
It is good at novels, role play, and multi-turn chat.
You can test this model in this (buggy) UI: https://rwkv.ai-creator.net/risu or https://rwkv.ai-creator.net/st, with the API hosted by RWKV Runner. Keep in mind that the frequency penalty is sensitive and fixes a lot of repetition.
Using temperature 0.1 and top-p 0.7 can give better results.
# Other
If you use RWKV Runner as the API, see
https://github.com/josStorer/RWKV-Runner/blob/a057bb6c5bebc346a50ae746f2b10000627552b0/backend-python/routes/completion.py#L52C29-L52C29
and change `user_name` and `assistant_name` to `User` and `Assistant` to replace the default `Question`/`Answer`, to match the fine-tune format.
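A hedged sketch of calling a locally running RWKV Runner instance from Python (the endpoint path, default port, and request fields are assumptions based on RWKV Runner's OpenAI-style API; adjust to your setup):

```python
import requests

# Assumptions: RWKV Runner is serving this model locally on port 8000 with its
# OpenAI-style /chat/completions route; user_name/assistant_name are the fields
# mentioned above, set to User/Assistant to match this fine-tune's format.
resp = requests.post(
    "http://127.0.0.1:8000/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Introduce yourself and stay in character."}],
        "user_name": "User",
        "assistant_name": "Assistant",
        "temperature": 0.1,
        "top_p": 0.7,
    },
)
print(resp.json())
```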


also you can do multi-lang with RWKV Runner






|
minfeng-ai/ppo-Huggy
|
minfeng-ai
| 2023-09-09T04:22:54Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-09T04:22:48Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: minfeng-ai/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HaohuaLv/mt5_large-lora_rank_16-yue_zh_translation
|
HaohuaLv
| 2023-09-09T04:22:32Z | 3 | 1 |
peft
|
[
"peft",
"text2text-generation",
"zh",
"license:openrail",
"region:us"
] |
text2text-generation
| 2023-09-09T04:00:05Z |
---
license: openrail
language:
- zh
metrics:
- sacrebleu
library_name: peft
pipeline_tag: text2text-generation
---
A LoRA based on google/mt5-large, fine-tuned on the indiejoseph/yue-zh-translation dataset.
It translates Mandarin to Cantonese, e.g.:
input: `translate Mandarin to Cantonese: 我都不知道你在说什么`
output: `我都唔知你講咩`
input: `translate Mandarin to Cantonese: 整天就知道打游戏`
output: `成日就知打遊戲`
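A minimal loading sketch with PEFT on top of the base model (generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-large")
model = PeftModel.from_pretrained(base, "HaohuaLv/mt5_large-lora_rank_16-yue_zh_translation")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-large")

text = "translate Mandarin to Cantonese: 我都不知道你在说什么"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```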
|
CRD716/ggml-LLaMa-65B-quantized
|
CRD716
| 2023-09-09T03:17:19Z | 0 | 30 | null |
[
"LLaMa",
"text-generation-inference",
"ggml",
"text-generation",
"en",
"bg",
"ca",
"cs",
"da",
"de",
"es",
"fr",
"hr",
"hu",
"it",
"nl",
"pl",
"pt",
"ro",
"ru",
"sl",
"sr",
"sv",
"uk",
"license:gpl-3.0",
"region:us"
] |
text-generation
| 2023-04-07T18:33:27Z |
---
license: gpl-3.0
metrics:
- perplexity
pipeline_tag: text-generation
tags:
- LLaMa
- text-generation-inference
- ggml
language:
- en
- bg
- ca
- cs
- da
- de
- es
- fr
- hr
- hu
- it
- nl
- pl
- pt
- ro
- ru
- sl
- sr
- sv
- uk
---
NOTE: DEPRECATED, BETTER PEOPLE DO THIS NOW
LLaMa 65B converted to ggml via LLaMa.cpp, then quantized to 4-bit.
Legacy is for llama.cpp setups older than https://github.com/ggerganov/llama.cpp/pull/1508; the regular version is faster but does not work on old versions.
I recommend the following settings when running as a good starting point:
```
main.exe -m ggml-LLaMa-65B-q4_0.bin -n -1 -t 32 -c 2048 --temp 0.7 --repeat_penalty 1.2 --mirostat 2 --interactive-first --color
```
Be aware that LLaMa is a text generation model, not a conversational one, and as such you will have to prompt it differently than, for example, Vicuna or ChatGPT.
|
Onutoa/1_1e-2_1_0.5
|
Onutoa
| 2023-09-09T02:37:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-08T23:38:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_1e-2_1_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_1e-2_1_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4701
- Accuracy: 0.7431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2311 | 1.0 | 590 | 1.5093 | 0.6217 |
| 1.0444 | 2.0 | 1180 | 0.5788 | 0.6196 |
| 0.9287 | 3.0 | 1770 | 1.3468 | 0.6217 |
| 0.8066 | 4.0 | 2360 | 0.7094 | 0.6217 |
| 0.6756 | 5.0 | 2950 | 0.5829 | 0.6486 |
| 0.5869 | 6.0 | 3540 | 0.5398 | 0.6670 |
| 0.5733 | 7.0 | 4130 | 0.6279 | 0.5716 |
| 0.5229 | 8.0 | 4720 | 0.4543 | 0.7061 |
| 0.4998 | 9.0 | 5310 | 0.4906 | 0.6685 |
| 0.476 | 10.0 | 5900 | 0.5972 | 0.6927 |
| 0.4498 | 11.0 | 6490 | 0.4602 | 0.7049 |
| 0.4082 | 12.0 | 7080 | 0.4432 | 0.7012 |
| 0.4072 | 13.0 | 7670 | 0.4585 | 0.6963 |
| 0.3746 | 14.0 | 8260 | 0.4281 | 0.7312 |
| 0.3652 | 15.0 | 8850 | 0.4691 | 0.7294 |
| 0.3505 | 16.0 | 9440 | 0.4156 | 0.7303 |
| 0.3375 | 17.0 | 10030 | 0.4299 | 0.7275 |
| 0.3298 | 18.0 | 10620 | 0.4948 | 0.7 |
| 0.3056 | 19.0 | 11210 | 0.4208 | 0.7275 |
| 0.2956 | 20.0 | 11800 | 0.4474 | 0.7324 |
| 0.2859 | 21.0 | 12390 | 0.5893 | 0.6746 |
| 0.2807 | 22.0 | 12980 | 0.4613 | 0.7291 |
| 0.2566 | 23.0 | 13570 | 0.4610 | 0.7235 |
| 0.249 | 24.0 | 14160 | 0.5434 | 0.7413 |
| 0.2391 | 25.0 | 14750 | 0.5110 | 0.7333 |
| 0.2421 | 26.0 | 15340 | 0.6915 | 0.6465 |
| 0.2556 | 27.0 | 15930 | 0.4759 | 0.7306 |
| 0.2271 | 28.0 | 16520 | 0.4690 | 0.7321 |
| 0.2295 | 29.0 | 17110 | 0.5012 | 0.7376 |
| 0.2283 | 30.0 | 17700 | 0.5150 | 0.7128 |
| 0.2054 | 31.0 | 18290 | 0.4737 | 0.7343 |
| 0.2157 | 32.0 | 18880 | 0.6032 | 0.7327 |
| 0.215 | 33.0 | 19470 | 0.4818 | 0.7297 |
| 0.196 | 34.0 | 20060 | 0.4894 | 0.7147 |
| 0.2001 | 35.0 | 20650 | 0.5326 | 0.7193 |
| 0.1955 | 36.0 | 21240 | 0.4826 | 0.7413 |
| 0.1947 | 37.0 | 21830 | 0.4625 | 0.7385 |
| 0.1912 | 38.0 | 22420 | 0.4764 | 0.7492 |
| 0.1946 | 39.0 | 23010 | 0.5615 | 0.7443 |
| 0.1898 | 40.0 | 23600 | 0.4870 | 0.7413 |
| 0.1789 | 41.0 | 24190 | 0.5526 | 0.7462 |
| 0.1803 | 42.0 | 24780 | 0.5021 | 0.7217 |
| 0.1708 | 43.0 | 25370 | 0.4751 | 0.7379 |
| 0.1835 | 44.0 | 25960 | 0.4738 | 0.7355 |
| 0.1738 | 45.0 | 26550 | 0.4759 | 0.7336 |
| 0.1726 | 46.0 | 27140 | 0.4928 | 0.7367 |
| 0.1756 | 47.0 | 27730 | 0.5380 | 0.7193 |
| 0.1617 | 48.0 | 28320 | 0.5119 | 0.7327 |
| 0.1725 | 49.0 | 28910 | 0.4884 | 0.7431 |
| 0.1643 | 50.0 | 29500 | 0.4968 | 0.7382 |
| 0.1593 | 51.0 | 30090 | 0.4708 | 0.7281 |
| 0.1645 | 52.0 | 30680 | 0.4943 | 0.7364 |
| 0.1566 | 53.0 | 31270 | 0.4820 | 0.7446 |
| 0.1555 | 54.0 | 31860 | 0.5117 | 0.7376 |
| 0.1584 | 55.0 | 32450 | 0.5269 | 0.7410 |
| 0.1587 | 56.0 | 33040 | 0.4650 | 0.7394 |
| 0.1527 | 57.0 | 33630 | 0.5007 | 0.7431 |
| 0.157 | 58.0 | 34220 | 0.4689 | 0.7413 |
| 0.1527 | 59.0 | 34810 | 0.4960 | 0.7306 |
| 0.1461 | 60.0 | 35400 | 0.5033 | 0.7416 |
| 0.1506 | 61.0 | 35990 | 0.4817 | 0.7459 |
| 0.153 | 62.0 | 36580 | 0.4782 | 0.7422 |
| 0.1417 | 63.0 | 37170 | 0.4808 | 0.7410 |
| 0.1477 | 64.0 | 37760 | 0.5090 | 0.7358 |
| 0.1467 | 65.0 | 38350 | 0.5180 | 0.7419 |
| 0.1416 | 66.0 | 38940 | 0.5055 | 0.7483 |
| 0.1407 | 67.0 | 39530 | 0.4779 | 0.7416 |
| 0.1407 | 68.0 | 40120 | 0.4661 | 0.7401 |
| 0.1379 | 69.0 | 40710 | 0.5172 | 0.7450 |
| 0.1432 | 70.0 | 41300 | 0.4883 | 0.7422 |
| 0.1455 | 71.0 | 41890 | 0.4853 | 0.7382 |
| 0.1348 | 72.0 | 42480 | 0.4934 | 0.7465 |
| 0.134 | 73.0 | 43070 | 0.4773 | 0.7462 |
| 0.1323 | 74.0 | 43660 | 0.5033 | 0.7428 |
| 0.1356 | 75.0 | 44250 | 0.5184 | 0.7483 |
| 0.1321 | 76.0 | 44840 | 0.4860 | 0.7382 |
| 0.1328 | 77.0 | 45430 | 0.4800 | 0.7422 |
| 0.1334 | 78.0 | 46020 | 0.4668 | 0.7489 |
| 0.128 | 79.0 | 46610 | 0.4930 | 0.7498 |
| 0.1315 | 80.0 | 47200 | 0.4808 | 0.7410 |
| 0.1236 | 81.0 | 47790 | 0.4718 | 0.7456 |
| 0.1286 | 82.0 | 48380 | 0.4723 | 0.7413 |
| 0.1264 | 83.0 | 48970 | 0.4987 | 0.7480 |
| 0.1273 | 84.0 | 49560 | 0.4582 | 0.7492 |
| 0.1243 | 85.0 | 50150 | 0.4713 | 0.7471 |
| 0.1286 | 86.0 | 50740 | 0.4913 | 0.7437 |
| 0.1186 | 87.0 | 51330 | 0.4953 | 0.7495 |
| 0.1194 | 88.0 | 51920 | 0.4805 | 0.7486 |
| 0.118 | 89.0 | 52510 | 0.4799 | 0.7474 |
| 0.1236 | 90.0 | 53100 | 0.4829 | 0.7471 |
| 0.1201 | 91.0 | 53690 | 0.4736 | 0.7474 |
| 0.1235 | 92.0 | 54280 | 0.4695 | 0.7431 |
| 0.1214 | 93.0 | 54870 | 0.4781 | 0.7446 |
| 0.1188 | 94.0 | 55460 | 0.4701 | 0.7456 |
| 0.1191 | 95.0 | 56050 | 0.4681 | 0.7456 |
| 0.1144 | 96.0 | 56640 | 0.4737 | 0.7453 |
| 0.1212 | 97.0 | 57230 | 0.4736 | 0.7446 |
| 0.1152 | 98.0 | 57820 | 0.4668 | 0.7410 |
| 0.1153 | 99.0 | 58410 | 0.4743 | 0.7437 |
| 0.1194 | 100.0 | 59000 | 0.4701 | 0.7431 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
HeshamHaroon/falcon-rw-1b-4bit
|
HeshamHaroon
| 2023-09-09T02:36:27Z | 115 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"falcon",
"text-generation",
"text-generation-inference",
"custom_code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-08-24T03:24:43Z |
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
---
# GPTQ Algorithm with `auto-gptq` Integration
## Model Description
The GPTQ algorithm, developed by Frantar et al., is designed to compress transformer-based language models into fewer bits with minimal performance degradation. The `auto-gptq` library, based on the GPTQ algorithm, has been seamlessly integrated into 🤗 Transformers, enabling users to load and work with models quantized using the GPTQ algorithm.
## Features
- **Quantization**: Compress transformer-based language models with minimal performance loss.
- **Integration with 🤗 Transformers**: Directly load models quantized with the GPTQ algorithm.
- **Flexibility**: Offers two scenarios for users:
1. Quantize a language model from scratch.
  2. Load a pre-quantized model from the 🤗 Hub.
- **Calibration**: Uses model inference to calibrate the quantized weights, ensuring optimal performance.
- **Custom Dataset Support**: Users can quantize models using either a supported dataset or a custom dataset.
## Intended Use
This integration is intended for users who want to compress their transformer-based language models without significant performance loss. It's especially useful for deployment scenarios where model size is a constraint.
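As a minimal loading sketch (an illustration, not part of the original card), a GPTQ-quantized checkpoint such as this repository can typically be loaded directly with 🤗 Transformers once `optimum` and `auto-gptq` are installed; the prompt and generation settings below are assumptions.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HeshamHaroon/falcon-rw-1b-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True because the repository is tagged with custom_code.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

inputs = tokenizer("Post-training quantization is useful because", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```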
## Limitations and Considerations
- The quality of quantization may vary based on the dataset used for calibration. It's recommended to use a dataset closely related to the model's domain for best results.
- While the GPTQ algorithm minimizes performance degradation, some loss in performance is expected, especially at lower bit quantizations.
## Training Data
The GPTQ algorithm requires calibration data for optimal quantization. Users can either use supported datasets like "c4", "wikitext2", etc., or provide a custom dataset for calibration.
## Evaluation Results
Performance after quantization may vary based on the dataset used for calibration and the bit precision chosen for quantization. It's recommended to evaluate the quantized model on relevant tasks to ensure it meets the desired performance criteria.
## References
- Frantar et al., "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers"
- [AutoGPTQ GitHub Repository](https://github.com/PanQiWei/AutoGPTQ)
|
OttoYu/Tree-Inspection
|
OttoYu
| 2023-09-09T02:13:13Z | 180 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:OttoYu/autotrain-data-tree-inspection",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-09T02:07:18Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- OttoYu/autotrain-data-tree-inspection
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 2.1481896644746374
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 87833143598
- CO2 Emissions (in grams): 2.1482
## Validation Metrics
- Loss: 1.251
- Accuracy: 0.652
- Macro F1: 0.594
- Micro F1: 0.652
- Weighted F1: 0.620
- Macro Precision: 0.629
- Micro Precision: 0.652
- Weighted Precision: 0.642
- Macro Recall: 0.617
- Micro Recall: 0.652
- Weighted Recall: 0.652
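As a hedged usage sketch (not part of the original card), the model can be queried with the standard image-classification pipeline; the image path below is a placeholder.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="OttoYu/Tree-Inspection")

# "tree.jpg" is a placeholder; any local photo or image URL can be passed instead.
print(classifier("tree.jpg"))
```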
|
AlienKevin/whisper-tiny-jyutping-without-tones
|
AlienKevin
| 2023-09-09T01:56:07Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"yue",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-09T01:55:37Z |
---
language:
- yue
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Tiny Jyutping without Tones
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Jyutping without Tones
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 14.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2079
- Wer: 22.8645
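A hedged usage sketch (not from the original card): the checkpoint can be used through the standard speech-recognition pipeline; the audio file name is a placeholder.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="AlienKevin/whisper-tiny-jyutping-without-tones",
)

# "sample.wav" is a placeholder for any Cantonese speech clip.
print(asr("sample.wav")["text"])
```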
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- training_steps: 800
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1622 | 0.62 | 800 | 0.2079 | 22.8645 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Onutoa/1_8e-3_5_0.5
|
Onutoa
| 2023-09-09T01:48:23Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-08T22:48:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_8e-3_5_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_8e-3_5_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9097
- Accuracy: 0.7502
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.008
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.7895 | 1.0 | 590 | 1.8785 | 0.6150 |
| 2.562 | 2.0 | 1180 | 2.8327 | 0.4046 |
| 2.4023 | 3.0 | 1770 | 2.0853 | 0.5217 |
| 2.3167 | 4.0 | 2360 | 1.5879 | 0.6505 |
| 2.161 | 5.0 | 2950 | 1.9917 | 0.4914 |
| 1.794 | 6.0 | 3540 | 2.5834 | 0.5110 |
| 1.9698 | 7.0 | 4130 | 3.1462 | 0.4927 |
| 1.5971 | 8.0 | 4720 | 1.6865 | 0.5966 |
| 1.5201 | 9.0 | 5310 | 3.4553 | 0.6413 |
| 1.5841 | 10.0 | 5900 | 3.1799 | 0.6327 |
| 1.5231 | 11.0 | 6490 | 1.1451 | 0.6933 |
| 1.3941 | 12.0 | 7080 | 1.1390 | 0.6884 |
| 1.3679 | 13.0 | 7670 | 1.4767 | 0.6902 |
| 1.2653 | 14.0 | 8260 | 1.5274 | 0.7028 |
| 1.2451 | 15.0 | 8850 | 1.6725 | 0.7073 |
| 1.255 | 16.0 | 9440 | 1.5284 | 0.7012 |
| 1.184 | 17.0 | 10030 | 1.0831 | 0.6979 |
| 1.1215 | 18.0 | 10620 | 2.0515 | 0.5755 |
| 1.0766 | 19.0 | 11210 | 1.1808 | 0.7263 |
| 1.1108 | 20.0 | 11800 | 1.0647 | 0.7190 |
| 1.0272 | 21.0 | 12390 | 1.2527 | 0.6654 |
| 1.036 | 22.0 | 12980 | 1.1910 | 0.6783 |
| 0.9735 | 23.0 | 13570 | 1.0311 | 0.7037 |
| 0.9167 | 24.0 | 14160 | 0.9997 | 0.7021 |
| 0.8494 | 25.0 | 14750 | 1.0338 | 0.7284 |
| 0.8461 | 26.0 | 15340 | 1.4642 | 0.6495 |
| 0.8466 | 27.0 | 15930 | 0.9877 | 0.7370 |
| 0.8498 | 28.0 | 16520 | 0.9401 | 0.7287 |
| 0.7851 | 29.0 | 17110 | 1.0208 | 0.7336 |
| 0.7796 | 30.0 | 17700 | 0.9350 | 0.7232 |
| 0.7725 | 31.0 | 18290 | 1.4097 | 0.7162 |
| 0.7599 | 32.0 | 18880 | 1.1313 | 0.7333 |
| 0.768 | 33.0 | 19470 | 1.0272 | 0.7379 |
| 0.7007 | 34.0 | 20060 | 0.9294 | 0.7364 |
| 0.6718 | 35.0 | 20650 | 0.9347 | 0.7330 |
| 0.6786 | 36.0 | 21240 | 1.0231 | 0.7416 |
| 0.6822 | 37.0 | 21830 | 0.9767 | 0.7413 |
| 0.6667 | 38.0 | 22420 | 0.9351 | 0.7272 |
| 0.6497 | 39.0 | 23010 | 0.9574 | 0.7355 |
| 0.638 | 40.0 | 23600 | 1.0610 | 0.7437 |
| 0.6468 | 41.0 | 24190 | 1.1462 | 0.7434 |
| 0.6046 | 42.0 | 24780 | 0.9750 | 0.7211 |
| 0.6079 | 43.0 | 25370 | 1.2040 | 0.7419 |
| 0.5806 | 44.0 | 25960 | 1.1603 | 0.7018 |
| 0.5753 | 45.0 | 26550 | 1.0639 | 0.7110 |
| 0.5693 | 46.0 | 27140 | 1.0966 | 0.7422 |
| 0.5757 | 47.0 | 27730 | 1.0137 | 0.7468 |
| 0.5692 | 48.0 | 28320 | 0.9476 | 0.7382 |
| 0.5732 | 49.0 | 28910 | 1.0004 | 0.7291 |
| 0.5563 | 50.0 | 29500 | 0.9870 | 0.7394 |
| 0.5217 | 51.0 | 30090 | 0.9681 | 0.7312 |
| 0.5239 | 52.0 | 30680 | 0.9812 | 0.7456 |
| 0.525 | 53.0 | 31270 | 1.0355 | 0.7196 |
| 0.5136 | 54.0 | 31860 | 0.9161 | 0.7385 |
| 0.5249 | 55.0 | 32450 | 1.0093 | 0.7382 |
| 0.5092 | 56.0 | 33040 | 1.0072 | 0.7428 |
| 0.4754 | 57.0 | 33630 | 1.0560 | 0.7425 |
| 0.4716 | 58.0 | 34220 | 0.9922 | 0.7425 |
| 0.4913 | 59.0 | 34810 | 1.0014 | 0.7480 |
| 0.4773 | 60.0 | 35400 | 0.9148 | 0.7352 |
| 0.4725 | 61.0 | 35990 | 0.9691 | 0.7474 |
| 0.4656 | 62.0 | 36580 | 0.9459 | 0.7453 |
| 0.4565 | 63.0 | 37170 | 0.9521 | 0.7388 |
| 0.4502 | 64.0 | 37760 | 1.0172 | 0.7474 |
| 0.4765 | 65.0 | 38350 | 0.9504 | 0.7327 |
| 0.4439 | 66.0 | 38940 | 0.9998 | 0.7443 |
| 0.4424 | 67.0 | 39530 | 1.0985 | 0.7498 |
| 0.4541 | 68.0 | 40120 | 0.9088 | 0.7446 |
| 0.4321 | 69.0 | 40710 | 0.9322 | 0.7379 |
| 0.4346 | 70.0 | 41300 | 1.0028 | 0.7495 |
| 0.4329 | 71.0 | 41890 | 0.8949 | 0.7385 |
| 0.4344 | 72.0 | 42480 | 0.9631 | 0.7544 |
| 0.4111 | 73.0 | 43070 | 0.9800 | 0.7272 |
| 0.4183 | 74.0 | 43660 | 1.1350 | 0.7541 |
| 0.4234 | 75.0 | 44250 | 0.9444 | 0.7511 |
| 0.4297 | 76.0 | 44840 | 0.9584 | 0.7526 |
| 0.4172 | 77.0 | 45430 | 0.9165 | 0.7413 |
| 0.4083 | 78.0 | 46020 | 0.9103 | 0.7401 |
| 0.4078 | 79.0 | 46610 | 0.9100 | 0.7468 |
| 0.3977 | 80.0 | 47200 | 0.9172 | 0.7480 |
| 0.3885 | 81.0 | 47790 | 0.9714 | 0.7523 |
| 0.4012 | 82.0 | 48380 | 1.0683 | 0.7547 |
| 0.3831 | 83.0 | 48970 | 0.9867 | 0.7575 |
| 0.3878 | 84.0 | 49560 | 0.9245 | 0.7541 |
| 0.3841 | 85.0 | 50150 | 0.9662 | 0.7327 |
| 0.3835 | 86.0 | 50740 | 0.9532 | 0.7505 |
| 0.3755 | 87.0 | 51330 | 0.9645 | 0.7492 |
| 0.379 | 88.0 | 51920 | 0.9183 | 0.7483 |
| 0.38 | 89.0 | 52510 | 0.9787 | 0.7523 |
| 0.37 | 90.0 | 53100 | 0.9205 | 0.7443 |
| 0.368 | 91.0 | 53690 | 0.9236 | 0.7446 |
| 0.3737 | 92.0 | 54280 | 0.9023 | 0.7419 |
| 0.3663 | 93.0 | 54870 | 0.9200 | 0.7514 |
| 0.3763 | 94.0 | 55460 | 0.9496 | 0.7517 |
| 0.3635 | 95.0 | 56050 | 0.9487 | 0.7508 |
| 0.3656 | 96.0 | 56640 | 0.9122 | 0.7502 |
| 0.3604 | 97.0 | 57230 | 0.9036 | 0.7498 |
| 0.3475 | 98.0 | 57820 | 0.9054 | 0.7474 |
| 0.3552 | 99.0 | 58410 | 0.9078 | 0.7471 |
| 0.3564 | 100.0 | 59000 | 0.9097 | 0.7502 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
kaungmyat/translation
|
kaungmyat
| 2023-09-09T01:33:31Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-08T16:35:20Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: translation
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 5.6441
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6122
- Bleu: 5.6441
- Gen Len: 17.5838
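As an illustrative sketch (not part of the original card), inference can be run with a text2text pipeline; the `translate English to French:` prefix follows the standard T5 recipe and is an assumption here.
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="kaungmyat/translation")

# The task prefix mirrors the usual T5 translation recipe (an assumption).
print(translator("translate English to French: The book is on the table.")[0]["generated_text"])
```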
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8593 | 1.0 | 6355 | 1.6362 | 5.4979 | 17.59 |
| 1.8198 | 2.0 | 12710 | 1.6122 | 5.6441 | 17.5838 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
FunkEngine/SchweinZwei-13b
|
FunkEngine
| 2023-09-09T01:20:57Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text generation",
"instruct",
"en",
"dataset:SchweinZwei/PIPPA",
"dataset:Open-Orca/OpenOrca",
"dataset:Norquinal/claude_multiround_chat_30k",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"dataset:databricks/databricks-dolly-15k",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-08T09:56:32Z |
---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
license: llama2
datasets:
- SchweinZwei/PIPPA
- Open-Orca/OpenOrca
- Norquinal/claude_multiround_chat_30k
- jondurbin/airoboros-gpt4-1.4.1
- databricks/databricks-dolly-15k
---
<h1 style="text-align: center">SchweinZwei/SchweinZwei-13b</h1>
<h2 style="text-align: center">An instruction-tuned Llama-2 biased towards fiction writing and conversation.</h2>
## Model Details
The long-awaited release of our new models based on Llama-2 is finally here. SchweinZwei-13b (formerly known as Metharme) is based on
[Llama-2 13B](https://huggingface.co/meta-llama/llama-2-13b-hf) released by Meta AI.
The Metharme models were an experiment to try and get a model that is usable for conversation, roleplaying and storywriting,
but which can be guided using natural language like other instruct models. After much deliberation, we reached the conclusion
that the Metharme prompting format is superior (and easier to use) compared to the classic Schweinen.
This model was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories
and conversations with synthetically generated instructions attached.
This model is freely available for both commercial and non-commercial use, as per the Llama-2 license.
## Prompting
The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to
form a conversation history.
### Prompting example
The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
```
## Dataset
The dataset used to fine-tune this model includes our own [PIPPA], along with several other instruction
datasets, and datasets acquired from various RP forums.
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
## Acknowledgements
We would like to thank [SpicyChat](https://spicychat.ai/) for sponsoring the training for this model.
|
Cohee/distilbert-base-uncased-go-emotions-onnx
|
Cohee
| 2023-09-09T01:19:25Z | 12,784 | 6 |
transformers
|
[
"transformers",
"onnx",
"distilbert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-09T01:11:39Z |
---
license: mit
---
[joeddav/distilbert-base-uncased-go-emotions-student](https://huggingface.co/joeddav/distilbert-base-uncased-go-emotions-student) converted to ONNX and quantized using optimum.
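As a hedged loading sketch (not part of the original card), the ONNX export can be run through `optimum.onnxruntime`; the exact ONNX file name inside this repository is an assumption, so pass `file_name=...` explicitly if the default is not found.
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "Cohee/distilbert-base-uncased-go-emotions-onnx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# If the quantized graph is not the default model.onnx, specify file_name=... here.
model = ORTModelForSequenceClassification.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=None)
print(classifier("I am so happy this finally works!"))
```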
---
# distilbert-base-uncased-go-emotions-student
## Model Description
This model is distilled from the zero-shot classification pipeline on the unlabeled GoEmotions dataset using [this
script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation).
It was trained with mixed precision for 10 epochs and otherwise used the default script arguments.
## Intended Usage
The model can be used like any other model trained on GoEmotions, but will likely not perform as well as a model
trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model
can be distilled to a more efficient student, allowing a classifier to be trained with only unlabeled data. Note
that although the GoEmotions dataset allows multiple labels per instance, the teacher used single-label
classification to create pseudo-labels.
|
nitikaverma26/Reinforce-Pixelcopter-PLE-v0
|
nitikaverma26
| 2023-09-09T01:01:50Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-09T01:01:45Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 81.80 +/- 50.82
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
murali1986/murali-test1
|
murali1986
| 2023-09-09T00:43:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-06T20:11:04Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Murali_test Dreambooth model trained by murali1986 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
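A hedged generation sketch (not part of the original card); the DreamBooth instance token is not stated here, so the prompt below is only a guess.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "murali1986/murali-test1", torch_dtype=torch.float16
).to("cuda")

# "murali_test" is an assumed instance token; replace it with the token used during training.
image = pipe("a photo of murali_test, portrait, studio lighting").images[0]
image.save("sample.png")
```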
Sample pictures of this concept:
|
AlienKevin/whisper-small-jyutping-without-tones
|
AlienKevin
| 2023-09-08T23:54:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"yue",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-08T23:53:39Z |
---
language:
- yue
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: Whisper Small Jyutping without Tones
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Jyutping without Tones
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 14.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0701
- eval_wer: 9.8213
- eval_runtime: 1761.3114
- eval_samples_per_second: 1.453
- eval_steps_per_second: 0.182
- epoch: 0.78
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Onutoa/1_1e-2_10_0.1
|
Onutoa
| 2023-09-08T23:37:49Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-08T20:38:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_1e-2_10_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_1e-2_10_0.1
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9213
- Accuracy: 0.7489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.8284 | 1.0 | 590 | 2.0796 | 0.6220 |
| 1.4411 | 2.0 | 1180 | 1.1449 | 0.6220 |
| 1.3365 | 3.0 | 1770 | 1.0330 | 0.6217 |
| 1.305 | 4.0 | 2360 | 0.9705 | 0.6349 |
| 1.1782 | 5.0 | 2950 | 0.9411 | 0.6339 |
| 1.1021 | 6.0 | 3540 | 1.4542 | 0.6223 |
| 1.091 | 7.0 | 4130 | 1.3703 | 0.4969 |
| 0.9725 | 8.0 | 4720 | 1.4839 | 0.6425 |
| 0.9313 | 9.0 | 5310 | 0.7887 | 0.7009 |
| 0.8889 | 10.0 | 5900 | 0.8354 | 0.7052 |
| 0.8457 | 11.0 | 6490 | 0.8120 | 0.6807 |
| 0.7264 | 12.0 | 7080 | 0.9915 | 0.6190 |
| 0.7354 | 13.0 | 7670 | 0.7554 | 0.7205 |
| 0.686 | 14.0 | 8260 | 0.8069 | 0.7183 |
| 0.6549 | 15.0 | 8850 | 0.7395 | 0.7379 |
| 0.6278 | 16.0 | 9440 | 0.7282 | 0.7275 |
| 0.5753 | 17.0 | 10030 | 0.9035 | 0.6795 |
| 0.5773 | 18.0 | 10620 | 0.8699 | 0.6887 |
| 0.5437 | 19.0 | 11210 | 0.7501 | 0.7226 |
| 0.5266 | 20.0 | 11800 | 0.9360 | 0.7336 |
| 0.509 | 21.0 | 12390 | 0.8204 | 0.7199 |
| 0.497 | 22.0 | 12980 | 0.7944 | 0.7343 |
| 0.4379 | 23.0 | 13570 | 0.8074 | 0.7147 |
| 0.4276 | 24.0 | 14160 | 0.8147 | 0.7306 |
| 0.4132 | 25.0 | 14750 | 0.8578 | 0.7373 |
| 0.3944 | 26.0 | 15340 | 0.9502 | 0.7015 |
| 0.3845 | 27.0 | 15930 | 0.8962 | 0.7021 |
| 0.3754 | 28.0 | 16520 | 0.8571 | 0.7275 |
| 0.3478 | 29.0 | 17110 | 0.8433 | 0.7373 |
| 0.3561 | 30.0 | 17700 | 0.8819 | 0.7327 |
| 0.3301 | 31.0 | 18290 | 0.8623 | 0.7382 |
| 0.3217 | 32.0 | 18880 | 0.9132 | 0.7419 |
| 0.3182 | 33.0 | 19470 | 0.9184 | 0.7281 |
| 0.2892 | 34.0 | 20060 | 0.8482 | 0.7358 |
| 0.2915 | 35.0 | 20650 | 0.8988 | 0.7474 |
| 0.2816 | 36.0 | 21240 | 0.8834 | 0.7446 |
| 0.2763 | 37.0 | 21830 | 0.9208 | 0.7251 |
| 0.2679 | 38.0 | 22420 | 0.8656 | 0.7379 |
| 0.2785 | 39.0 | 23010 | 0.9177 | 0.7315 |
| 0.2551 | 40.0 | 23600 | 0.9989 | 0.7508 |
| 0.2491 | 41.0 | 24190 | 0.9483 | 0.7505 |
| 0.2482 | 42.0 | 24780 | 0.8921 | 0.7391 |
| 0.2577 | 43.0 | 25370 | 0.9175 | 0.7459 |
| 0.24 | 44.0 | 25960 | 0.9345 | 0.7453 |
| 0.2368 | 45.0 | 26550 | 0.9161 | 0.7428 |
| 0.2261 | 46.0 | 27140 | 0.8859 | 0.7315 |
| 0.2317 | 47.0 | 27730 | 0.8984 | 0.7437 |
| 0.218 | 48.0 | 28320 | 0.8986 | 0.7465 |
| 0.224 | 49.0 | 28910 | 0.8665 | 0.7431 |
| 0.2064 | 50.0 | 29500 | 0.8869 | 0.7492 |
| 0.2163 | 51.0 | 30090 | 0.8786 | 0.7394 |
| 0.2145 | 52.0 | 30680 | 0.9545 | 0.7446 |
| 0.1998 | 53.0 | 31270 | 0.8586 | 0.7462 |
| 0.2008 | 54.0 | 31860 | 0.9008 | 0.7446 |
| 0.1978 | 55.0 | 32450 | 0.9236 | 0.7471 |
| 0.2025 | 56.0 | 33040 | 0.8906 | 0.7474 |
| 0.1903 | 57.0 | 33630 | 0.9517 | 0.7459 |
| 0.1846 | 58.0 | 34220 | 0.9696 | 0.7529 |
| 0.1819 | 59.0 | 34810 | 0.9163 | 0.7419 |
| 0.1883 | 60.0 | 35400 | 0.9419 | 0.7373 |
| 0.1851 | 61.0 | 35990 | 0.9657 | 0.7419 |
| 0.1805 | 62.0 | 36580 | 0.9279 | 0.7413 |
| 0.1866 | 63.0 | 37170 | 0.8996 | 0.7495 |
| 0.1752 | 64.0 | 37760 | 0.9427 | 0.7554 |
| 0.1703 | 65.0 | 38350 | 0.9364 | 0.7379 |
| 0.1702 | 66.0 | 38940 | 0.9546 | 0.7502 |
| 0.1688 | 67.0 | 39530 | 0.9265 | 0.7498 |
| 0.1724 | 68.0 | 40120 | 0.9043 | 0.7446 |
| 0.1635 | 69.0 | 40710 | 0.9426 | 0.7465 |
| 0.1652 | 70.0 | 41300 | 0.9702 | 0.7471 |
| 0.1643 | 71.0 | 41890 | 0.9191 | 0.7379 |
| 0.1684 | 72.0 | 42480 | 0.9362 | 0.7526 |
| 0.1575 | 73.0 | 43070 | 0.9399 | 0.7511 |
| 0.1585 | 74.0 | 43660 | 0.9585 | 0.7483 |
| 0.1551 | 75.0 | 44250 | 0.9481 | 0.7532 |
| 0.1587 | 76.0 | 44840 | 0.9233 | 0.7483 |
| 0.1499 | 77.0 | 45430 | 0.9115 | 0.7508 |
| 0.1541 | 78.0 | 46020 | 0.9531 | 0.7535 |
| 0.1505 | 79.0 | 46610 | 0.9306 | 0.7456 |
| 0.1521 | 80.0 | 47200 | 0.9185 | 0.7535 |
| 0.1448 | 81.0 | 47790 | 0.9228 | 0.7459 |
| 0.1475 | 82.0 | 48380 | 0.9214 | 0.7446 |
| 0.1491 | 83.0 | 48970 | 0.9355 | 0.7465 |
| 0.1433 | 84.0 | 49560 | 0.9403 | 0.7523 |
| 0.1416 | 85.0 | 50150 | 0.9270 | 0.7492 |
| 0.1391 | 86.0 | 50740 | 0.9208 | 0.7517 |
| 0.1391 | 87.0 | 51330 | 0.9134 | 0.7517 |
| 0.1415 | 88.0 | 51920 | 0.9198 | 0.7486 |
| 0.1343 | 89.0 | 52510 | 0.9380 | 0.7483 |
| 0.128 | 90.0 | 53100 | 0.9429 | 0.7505 |
| 0.1328 | 91.0 | 53690 | 0.9211 | 0.7529 |
| 0.1311 | 92.0 | 54280 | 0.9180 | 0.7431 |
| 0.1383 | 93.0 | 54870 | 0.9522 | 0.7535 |
| 0.133 | 94.0 | 55460 | 0.9047 | 0.7486 |
| 0.1331 | 95.0 | 56050 | 0.9339 | 0.7526 |
| 0.1304 | 96.0 | 56640 | 0.9177 | 0.7480 |
| 0.1293 | 97.0 | 57230 | 0.9194 | 0.7471 |
| 0.128 | 98.0 | 57820 | 0.9213 | 0.7492 |
| 0.1268 | 99.0 | 58410 | 0.9260 | 0.7492 |
| 0.1297 | 100.0 | 59000 | 0.9213 | 0.7489 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
speechlessai/speechless-baichuan2-dolphin-orca-platypus-13b
|
speechlessai
| 2023-09-08T23:29:00Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"en",
"zh",
"dataset:ehartford/dolphin",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-08T11:27:45Z |
---
language:
- en
- zh
license: apache-2.0
tasks:
- text-generation
datasets:
- ehartford/dolphin
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
---
<p><h1> speechless-baichuan2-dolphin-orca-platypus-13b </h1></p>
Fine-tuned from baichuan-inc/Baichuan2-13B-Base on the Dolphin, Orca and Platypus datasets.
| Metric | Value |
| --- | --- |
| ARC | |
| HellaSwag | |
| MMLU | |
| TruthfulQA | |
| Average | |
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<h1>
Baichuan 2
</h1>
</div>
<div align="center">
  <a href="https://github.com/baichuan-inc/Baichuan2" target="_blank">GitHub</a> | <a href="https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true" target="_blank">WeChat</a>
</div>
<div align="center">
<a href="https://www.baichuan-ai.com/" target="_blank">The Baichuan large-model online chat platform</a> is now officially open to the public.
</div>
# Table of Contents
- [Introduction](#Introduction)
- [Quick Start](#Start)
- [Benchmark Evaluation](#Benchmark)
- [Terms and Conditions](#Terms)
# <span id="Introduction">Introduction</span>
Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence inc.](https://www.baichuan-ai.com/).
It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance in authoritative Chinese and English benchmarks of the same size.
This release includes 7B and 13B versions for both Base and Chat models, along with a 4bits quantized version for the Chat model.
All versions are fully open to academic research, and developers can also use them for free in commercial applications after obtaining an official commercial license through [email request](mailto:opensource@baichuan-inc.com).
The specific release versions and download links are listed in the table below:
| | Base Model | Chat Model | 4bits Quantized Chat Model |
|:---:|:--------------------:|:--------------------:|:--------------------------:|
| 7B | [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) | [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) | [Baichuan2-7B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base-4bits) |
| 13B | [Baichuan2-13B-Base](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) | [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) |
# <span id="Start">Quick Start</span>
In the Baichuan 2 series models, we have utilized the new feature `F.scaled_dot_product_attention` introduced in PyTorch 2.0 to accelerate inference speed. Therefore, the model needs to be run in a PyTorch 2.0 environment.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-13B-Base", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-13B-Base", device_map="auto", trust_remote_code=True)
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
# <span id="Benchmark">Benchmark Evaluation</span>
We have extensively tested the model on authoritative Chinese-English datasets across six domains: [General](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#general-domain), [Legal](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Medical](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Mathematics](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), [Code](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), and [Multilingual Translation](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#multilingual-translation). For more detailed evaluation results, please refer to [GitHub](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md).
### 7B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-7B** | 27.10 | 35.10 | 26.75 | 27.81 | 28.17 | 32.38 |
| **LLaMA2-7B** | 28.90 | 45.73 | 31.38 | 25.97 | 26.53 | 39.16 |
| **MPT-7B** | 27.15 | 27.93 | 26.00 | 26.54 | 24.83 | 35.20 |
| **Falcon-7B** | 24.23 | 26.03 | 25.66 | 24.24 | 24.10 | 28.77 |
| **ChatGLM2-6B** | 50.20 | 45.90 | 49.00 | 49.44 | 45.28 | 31.65 |
| **[Baichuan-7B]** | 42.80 | 42.30 | 44.02 | 36.34 | 34.44 | 32.48 |
| **[Baichuan2-7B-Base]** | 54.00 | 54.16 | 57.07 | 47.47 | 42.73 | 41.56 |
### 13B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:---------------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-13B** | 28.50 | 46.30 | 31.15 | 28.23 | 28.22 | 37.89 |
| **LLaMA2-13B** | 35.80 | 55.09 | 37.99 | 30.83 | 32.29 | 46.98 |
| **Vicuna-13B** | 32.80 | 52.00 | 36.28 | 30.11 | 31.55 | 43.04 |
| **Chinese-Alpaca-Plus-13B** | 38.80 | 43.90 | 33.43 | 34.78 | 35.46 | 28.94 |
| **XVERSE-13B** | 53.70 | 55.21 | 58.44 | 44.69 | 42.54 | 38.06 |
| **[Baichuan-13B-Base]** | 52.40 | 51.60 | 55.30 | 49.69 | 43.20 | 43.01 |
| **[Baichuan2-13B-Base]** | 58.10 | 59.17 | 61.97 | 54.33 | 48.17 | 48.78 |
## Training Dynamics
In addition to the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model trained on 2.6 trillion tokens, we also offer 11 additional intermediate-stage models for community research, corresponding to training on approximately 0.2 to 2.4 trillion tokens each ([Intermediate Checkpoints Download](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints)). The graph below shows the performance changes of these checkpoints on three benchmarks: C-Eval, MMLU, and CMMLU.

# <span id="Terms">Terms and Conditions</span>
## Disclaimer
We hereby declare that our team has not developed any applications based on Baichuan 2 models, not on iOS, Android, the web, or any other platform. We strongly call on all users not to use Baichuan 2 models for any activities that harm national / social security or violate the law. Also, we ask users not to use Baichuan 2 models for Internet services that have not undergone appropriate security reviews and filings. We hope that all users can abide by this principle and ensure that the development of technology proceeds in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, there may still be some unforeseeable issues due to the complexity of the model and data. Therefore, if any problems arise due to the use of Baichuan 2 open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems brought about by the model being misled, abused, spread or improperly exploited, we will not assume any responsibility.
## License
The use of the source code in this repository follows the open-source license Apache 2.0. Community use of the Baichuan 2 model must adhere to the [Community License for Baichuan 2 Model](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf). Baichuan 2 supports commercial use. If you are using the Baichuan 2 models or their derivatives for commercial purposes, please contact the licensor in the following manner for registration and to apply for written authorization: Email opensource@baichuan-inc.com.
[GitHub]:https://github.com/baichuan-inc/Baichuan2
[Baichuan2]:https://github.com/baichuan-inc/Baichuan2
[Baichuan-7B]:https://huggingface.co/baichuan-inc/Baichuan-7B
[Baichuan2-7B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base
[Baichuan2-7B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
[Baichuan2-7B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits
[Baichuan-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan-13B-Base
[Baichuan2-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Base
[Baichuan2-13B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
[Baichuan2-13B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits
[้็จ]:https://github.com/baichuan-inc/Baichuan2#%E9%80%9A%E7%94%A8%E9%A2%86%E5%9F%9F
[ๆณๅพ]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[ๅป็]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[ๆฐๅญฆ]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[ไปฃ็ ]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[ๅค่ฏญ่จ็ฟป่ฏ]:https://github.com/baichuan-inc/Baichuan2#%E5%A4%9A%E8%AF%AD%E8%A8%80%E7%BF%BB%E8%AF%91
[ใBaichuan 2 ๆจกๅ็คพๅบ่ฎธๅฏๅ่ฎฎใ]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf
[้ฎไปถ็ณ่ฏท]: mailto:opensource@baichuan-inc.com
[Email]: mailto:opensource@baichuan-inc.com
[opensource@baichuan-inc.com]: mailto:opensource@baichuan-inc.com
[่ฎญ็ป่ฟ็จheckpointไธ่ฝฝ]: https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
[็พๅทๆบ่ฝ]: https://www.baichuan-ai.com
|
Brouz/REMM-PYG-0.65-SLERP
|
Brouz
| 2023-09-08T22:56:11Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-08T19:21:50Z |
---
license: llama2
---
ReMM-SLERP-L2-13B merged with pygmalion-2-13b at 0.65 weight, using Ties-Merge with SLERP.
13B model
|
Brouz/Slerpeno
|
Brouz
| 2023-09-08T22:51:29Z | 1,534 | 4 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-08T00:33:20Z |
---
license: cc-by-4.0
---
Uses the same models as Stheno, but merged using the SLERP method instead.
13B model
|
Fredyco/FIFA
|
Fredyco
| 2023-09-08T22:32:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-08T22:30:43Z |
---
title: Fifa 2022 Model 1
emoji: ๐
colorFrom: pink
colorTo: purple
sdk: streamlit
sdk_version: 1.17.0
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
Fredithefish/Guanaco-13B-Uncensored
|
Fredithefish
| 2023-09-08T22:07:16Z | 1,500 | 13 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:Fredithefish/openassistant-guanaco-unfiltered",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-07T12:25:27Z |
---
license: apache-2.0
datasets:
- Fredithefish/openassistant-guanaco-unfiltered
language:
- en
library_name: transformers
pipeline_tag: conversational
inference: false
---
<img src="https://huggingface.co/Fredithefish/Guanaco-3B-Uncensored/resolve/main/Guanaco-Uncensored.jpg" alt="Alt Text" width="295"/>
# โจ Guanaco - 13B - Uncensored โจ
Guanaco-13B-Uncensored has been fine-tuned for 4 epochs on the [Unfiltered Guanaco Dataset](https://huggingface.co/datasets/Fredithefish/openassistant-guanaco-unfiltered), using [Llama-2-13B](https://hf.co/meta-llama/Llama-2-13b-hf) as the base model.
<br>The model does not perform well with languages other than English.
<br>Please note: This model is designed to provide responses without content filtering or censorship. It generates answers without denials.
## Special thanks
I would like to thank AutoMeta for providing me with the computing power necessary to train this model.
Also thanks to TheBloke for creating [the GGUF](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF) and [the GPTQ](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GPTQ) quantizations for this model.
### Prompt Template
```
### Human: {prompt} ### Assistant:
```
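As an illustrative sketch (not part of the original card), the template above can be used with a plain text-generation pipeline; the sampling settings are assumptions.
```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Fredithefish/Guanaco-13B-Uncensored",
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "### Human: What is the tallest mountain on Earth? ### Assistant:"
output = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```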
### Dataset
The model has been fine-tuned on the V2 of the Guanaco unfiltered dataset.
|
mgmeskill/downstrike-320m
|
mgmeskill
| 2023-09-08T22:04:56Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-09-08T22:02:18Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: mgmeskill/downstrike-320m
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MattStammers/a2c-PandaPickAndPlace-v3
|
MattStammers
| 2023-09-08T22:00:42Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-08T21:55:15Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -40.00 +/- 20.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
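A minimal loading sketch (not from the original card); the checkpoint file name inside the repo and the use of `panda_gym` for the environment are assumptions.
```python
import gymnasium as gym
import panda_gym  # registers the Panda environments on import
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub (the .zip file name is assumed).
checkpoint = load_from_hub(
    repo_id="MattStammers/a2c-PandaPickAndPlace-v3",
    filename="a2c-PandaPickAndPlace-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaPickAndPlace-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
print(action)
```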
|
mmnga/stockmark-gpt-neox-japanese-1.4b-gguf
|
mmnga
| 2023-09-08T22:00:37Z | 727 | 1 | null |
[
"gguf",
"gpt-neox",
"ja",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-08-22T12:45:18Z |
---
license: mit
language:
- ja
tags:
- gpt-neox
---
# stockmark-gpt-neox-japanese-1.4b-gguf
This is a gguf conversion of [gpt-neox-japanese-1.4b, published by stockmark](https://huggingface.co/stockmark/gpt-neox-japanese-1.4b).
Note: this is a trial build from a development branch; if gpt-neox support is later implemented in upstream llama.cpp, this gguf file may no longer be usable.
***[The README of the GitHub repository is here](https://github.com/mmnga/llama.cpp/tree/mmnga-dev)***
## Usage
```
git clone --branch mmnga-dev https://github.com/mmnga/llama.cpp.git
cd llama.cpp
make -j
./main -m 'stockmark-gpt-neox-japanese-1.4b-q4_0.bin' -n 128 -p '吾輩は猫である。名前は実を言うと、' --top_p 0.9 --temp 0.7 --repeat-penalty 1.1
```
**CUBLAS**
```
LLAMA_CUBLAS=1 make -j
./main -m 'stockmark-gpt-neox-japanese-1.4b-q4_0.bin' -n 128 -p '吾輩は猫である。名前は実を言うと、' -ngl 24
```
|
luissgtorres/Bert_sentiment_analysis_Indata
|
luissgtorres
| 2023-09-08T21:39:54Z | 66 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T17:31:11Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Bert_sentiment_analysis_Indata
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Bert_sentiment_analysis_Indata
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.33.0
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
PHL99/Reinforce-Cartpole-v1
|
PHL99
| 2023-09-08T21:39:42Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-08T21:39:31Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Beniuv/a2c-PandaReachDense-v3
|
Beniuv
| 2023-09-08T21:37:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-08T21:31:51Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.17 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
rnkVikcdkam/Taxi-v3
|
rnkVikcdkam
| 2023-09-08T21:30:49Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-08T21:30:47Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook (Unit 2).
model = load_from_hub(repo_id="rnkVikcdkam/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
quantumaikr/falcon-180B-wizard_alpaca_dolly_orca
|
quantumaikr
| 2023-09-08T21:28:51Z | 1,517 | 4 |
transformers
|
[
"transformers",
"safetensors",
"falcon",
"text-generation",
"en",
"de",
"es",
"fr",
"dataset:tiiuae/falcon-refinedweb",
"dataset:nRuaif/wizard_alpaca_dolly_orca",
"license:unknown",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-07T16:31:18Z |
---
datasets:
- tiiuae/falcon-refinedweb
- nRuaif/wizard_alpaca_dolly_orca
language:
- en
- de
- es
- fr
inference: false
license: unknown
---
# 🇰🇷 quantumaikr/falcon-180B-wizard_alpaca_dolly_orca
**quantumaikr/falcon-180B-wizard_alpaca_dolly_orca is a 180B-parameter causal decoder-only model built by [quantumaikr](https://www.quantumai.kr) based on [Falcon-180B-chat](https://huggingface.co/tiiuae/falcon-180B-chat)**
## How to Get Started with the Model
To run inference with the model in full `bfloat16` precision you need approximately 8xA100 80GB or equivalent.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "quantumaikr/falcon-180B-wizard_alpaca_dolly_orca"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
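If that much hardware is out of reach, 4-bit quantization is one option. The sketch below is not a configuration from the model author: it assumes `bitsandbytes` and `accelerate` are installed, and even in 4-bit a 180B model still needs well over 100GB of total GPU memory.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

model_id = "quantumaikr/falcon-180B-wizard_alpaca_dolly_orca"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Quantized weights are sharded across all visible GPUs by device_map="auto".
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```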
## Contact
🇰🇷 www.quantumai.kr
🇰🇷 hi@quantumai.kr [inquiries about adopting large language model technology are welcome]
|
quantumaikr/falcon-180B-WizardLM_Orca
|
quantumaikr
| 2023-09-08T21:28:26Z | 1,512 | 1 |
transformers
|
[
"transformers",
"safetensors",
"falcon",
"text-generation",
"en",
"de",
"es",
"fr",
"dataset:tiiuae/falcon-refinedweb",
"dataset:pankajmathur/WizardLM_Orca",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-08T03:47:54Z |
---
datasets:
- tiiuae/falcon-refinedweb
- pankajmathur/WizardLM_Orca
language:
- en
- de
- es
- fr
inference: false
---
# 🇰🇷 quantumaikr/falcon-180B-WizardLM_Orca
**quantumaikr/falcon-180B-WizardLM_Orca is a 180B-parameter causal decoder-only model built by [quantumaikr](https://www.quantumai.kr) on top of [Falcon-180B-chat](https://huggingface.co/tiiuae/falcon-180B-chat).**
## How to Get Started with the Model
To run inference with the model in full `bfloat16` precision, you need approximately 8x A100 80GB GPUs or the equivalent.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "quantumaikr/falcon-180B-WizardLM_Orca"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Contact
🇰🇷 www.quantumai.kr
🇰🇷 hi@quantumai.kr [inquiries about adopting large language model technology are welcome]
|
Dischordo/Anime
|
Dischordo
| 2023-09-08T21:20:48Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-09-08T21:12:00Z |
---
license: openrail
---
Nekezuga: a Clip Skip 1-capable manga-style model, tuned away from bhili styles and more toward retro Western tastes.
Preview images are mostly raw at 1024 with no upscaling; metadata is left on the images.
|
rebolforces/a2c-PandaReachDense-v2g
|
rebolforces
| 2023-09-08T21:17:39Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-19T05:58:22Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.03 +/- 0.78
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's files and adjust it):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the filename is an assumption).
checkpoint = load_from_hub(repo_id="rebolforces/a2c-PandaReachDense-v2g", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
rebolforces/a2c-PandaReachDense-v2f
|
rebolforces
| 2023-09-08T21:17:25Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-19T05:42:53Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.23 +/- 0.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's files and adjust it):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the filename is an assumption).
checkpoint = load_from_hub(repo_id="rebolforces/a2c-PandaReachDense-v2f", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
ccore/opt-125-smart-test
|
ccore
| 2023-09-08T21:08:41Z | 124 | 1 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-31T19:21:58Z |
---
license: gpl-3.0
---
hf-causal (pretrained=ccore/opt-125-smart-test), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|----------|------:|--------|-----:|---|-----:|
|openbookqa| 0|acc |0.1560|± |0.0162|
| | |acc_norm|0.3420|± |0.0212|
|piqa | 0|acc |0.6197|± |0.0113|
| | |acc_norm|0.6023|± |0.0114|
prompt format:
[INSTRUCTION] What's the capital of Brazil?
[RESPONSE]
The capital of Brazil is Brasilia
Datasets: OpenOrca, the Wizard dataset, and custom papers data.
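A minimal generation sketch using this prompt format (the sampling parameters are illustrative, not from the card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ccore/opt-125-smart-test")

prompt = "[INSTRUCTION] What's the capital of Brazil?\n[RESPONSE]\n"
output = generator(prompt, max_new_tokens=64, do_sample=True, top_k=10)
print(output[0]["generated_text"])
```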
|
OtterDev/otterchat
|
OtterDev
| 2023-09-08T21:05:45Z | 149 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"code",
"dataset:xquad",
"dataset:xquad_r",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-04-25T00:23:19Z |
---
license: apache-2.0
datasets:
- xquad
- xquad_r
tags:
- code
---
# OtterChat
<!-- Provide a quick summary of what the model is/does. -->
OtterChat is a custom-trained question-answering model that answers questions about the data you give it.
## Model Details
- **Developed by:** OtterDev
- **Model type:** Question Answering
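The card ships no usage snippet; below is a sketch with the standard `transformers` question-answering pipeline (the context string is made up for illustration):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="OtterDev/otterchat")

result = qa(
    question="What does OtterChat do?",
    context="OtterChat is a question-answering model that answers questions about a given passage of text.",
)
print(result["answer"], result["score"])
```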
|
MattStammers/a2c-PandaReachDense-v3
|
MattStammers
| 2023-09-08T20:52:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-08T20:28:58Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.20 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's files and adjust it):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the filename is an assumption).
checkpoint = load_from_hub(repo_id="MattStammers/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
There are some issues with the video, but this is a much better robotic reacher; I will try to sort it out later.
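In the meantime, a short rollout loop gives a quick qualitative check without a video; this sketch assumes `gymnasium` and `panda_gym` are installed and that `model` was loaded as above:
```python
import gymnasium as gym
import panda_gym  # importing panda_gym registers PandaReachDense-v3

env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
for _ in range(200):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```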
|