| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-04 06:26:56) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 538 classes) | tags (list, 1–4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-04 06:26:41) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
Lew/ppo-SnowballTarget
|
Lew
| 2023-12-08T07:10:13Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-12-08T07:10:10Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Lew/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
multimodalart/polipure-repeats-3-steps-500-rank64-no-prior
|
multimodalart
| 2023-12-08T07:08:08Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-08T06:49:13Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - multimodalart/polipure-repeats-3-steps-500-rank64-no-prior
<Gallery />
## Model description
### These are multimodalart/polipure-repeats-3-steps-500-rank64-no-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/polipure-repeats-3-steps-500-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-3-steps-500-rank64-no-prior', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('<s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
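If you want to bake the adapter into the base weights before generating, recent diffusers releases also expose `fuse_lora`; a minimal sketch continuing from the snippet above (the step count is illustrative):
```python
# Optional: fuse the LoRA into the UNet/text-encoder weights for slightly faster inference.
pipeline.fuse_lora()
image = pipeline('<s0><s1>', num_inference_steps=25).images[0]
image.save("sample.png")
```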
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-3-steps-500-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-3-steps-500-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/multimodalart/polipure-repeats-3-steps-500-rank64-no-prior/tree/main).
## Details
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
hahahafofo/Qwen-1_8B-Stable-Diffusion-Prompt
|
hahahafofo
| 2023-12-08T07:07:48Z | 49 | 12 |
transformers
|
[
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-12-08T07:01:59Z |
---
license: apache-2.0
---
This model generates prompts for Stable Diffusion. It primarily translates Chinese into the corresponding English prompts, leveraging the capabilities of Qwen, and supports a variety of input formats, including classical poetry, for prompt generation. The model was fine-tuned for this specific task (SFT) on a dataset of 35k entries.
The corresponding code is available on GitHub. You can use it together with SDXL-Turbo for rapid testing of the generated prompts:
[https://github.com/zhongpei/Qwen-SDXL-Turbo.git](https://github.com/zhongpei/Qwen-SDXL-Turbo.git)
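A minimal loading sketch with transformers (an assumption, not from the card: the fine-tune keeps the upstream Qwen remote-code interface, so `trust_remote_code=True` and `model.chat` are used; the example input is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hahahafofo/Qwen-1_8B-Stable-Diffusion-Prompt"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True).eval()

# Chinese description in, English Stable Diffusion prompt out
# ("a Shiba Inu running through the snow").
response, _history = model.chat(tokenizer, "一只在雪地里奔跑的柴犬", history=None)
print(response)
```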
**Parameters Table:**
| Parameter | Value |
| -------------- | -------- |
| Data | 35k |
| Epochs | 1 |
| Learning Rate | 1e-05 |
| Loss | 1.03 |
|
prajapatisarvesh/a2c-PandaReachDense-v3
|
prajapatisarvesh
| 2023-12-08T07:02:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-08T06:54:45Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.23 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
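Until that is filled in, a minimal loading sketch with `huggingface_sb3` (assumptions: the checkpoint follows the usual `<algo>-<env>.zip` naming, and `panda_gym` is installed to register the environment):
```python
import gymnasium as gym
import panda_gym  # noqa: F401  # registers PandaReachDense-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption; adjust it to match the repo's files.
checkpoint = load_from_hub(repo_id="prajapatisarvesh/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```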
|
jinglianfan/fine-tune-bloom560m-lora
|
jinglianfan
| 2023-12-08T06:59:07Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"question-answering",
"dataset:squad_v2",
"arxiv:1910.09700",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"region:us"
] |
question-answering
| 2023-12-08T06:46:48Z |
---
library_name: peft
base_model: bigscience/bloom-560m
datasets:
- squad_v2
pipeline_tag: question-answering
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
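A minimal loading sketch with PEFT, assuming the adapter sits on top of the causal-LM variant of `bigscience/bloom-560m` as the tags suggest (the prompt format below is illustrative, not taken from the card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
model = PeftModel.from_pretrained(base, "jinglianfan/fine-tune-bloom560m-lora")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

# SQuAD-style extractive prompting; adapt this to the format used during fine-tuning.
prompt = "Context: The Eiffel Tower is in Paris.\nQuestion: Where is the Eiffel Tower?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```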
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1.dev0
|
Mattffc4/q-FrozenLake-v1-4x4-noSlippery
|
Mattffc4
| 2023-12-08T06:42:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-08T06:42:12Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is assumed to be the pickle-loading helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="Mattffc4/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
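A short greedy rollout continuing from the snippet above (the `qtable` key is an assumption based on the usual course format):
```python
import numpy as np

# Greedy evaluation; assumes the pickled dict exposes a "qtable" array indexed by [state, action].
state, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```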
|
GOT-HW/12345
|
GOT-HW
| 2023-12-08T06:14:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"chemistry",
"token-classification",
"zh",
"dataset:HuggingFaceH4/no_robots",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2023-08-16T13:40:58Z |
---
license: apache-2.0
datasets:
- HuggingFaceH4/no_robots
language:
- zh
metrics:
- bleurt
library_name: diffusers
pipeline_tag: token-classification
tags:
- chemistry
---
|
Rafaelrosendo1/whisper2
|
Rafaelrosendo1
| 2023-12-08T06:00:26Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-04T11:38:38Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2574
- Wer: 96.9001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
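For reference, these settings map roughly onto the following `Seq2SeqTrainingArguments`; a sketch assuming the standard Seq2SeqTrainer setup rather than the authors' exact script:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper2",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,  # effective train batch size 32
    warmup_steps=500,
    max_steps=4000,
    seed=42,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed precision
)
```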
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1143 | 1.27 | 1000 | 0.2184 | 56.9383 |
| 0.0524 | 2.54 | 2000 | 0.2134 | 54.9835 |
| 0.024 | 3.82 | 3000 | 0.2309 | 61.1703 |
| 0.0076 | 5.09 | 4000 | 0.2574 | 96.9001 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Gizachew/amharic_hate_speech-PEFT
|
Gizachew
| 2023-12-08T05:56:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Davlan/bert-base-multilingual-cased-finetuned-amharic",
"base_model:adapter:Davlan/bert-base-multilingual-cased-finetuned-amharic",
"region:us"
] | null | 2023-12-08T05:56:09Z |
---
library_name: peft
base_model: Davlan/bert-base-multilingual-cased-finetuned-amharic
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.0
|
k0x3k/ppo-lunarlander-v2
|
k0x3k
| 2023-12-08T05:54:46Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-07T06:46:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 294.88 +/- 14.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
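Until that is filled in, a minimal loading and evaluation sketch with `huggingface_sb3` (the checkpoint filename is an assumption; LunarLander-v2 needs gymnasium's Box2D extras):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Filename is an assumption; adjust it to match the repo's files.
checkpoint = load_from_hub(repo_id="k0x3k/ppo-lunarlander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```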
|
wisejiyoon/bert-finetuned-ner
|
wisejiyoon
| 2023-12-08T05:48:43Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-07T08:14:37Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.8597087378640776
- name: Recall
type: recall
value: 0.8941433860652979
- name: F1
type: f1
value: 0.8765880217785844
- name: Accuracy
type: accuracy
value: 0.9760991339759331
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0943
- Precision: 0.8597
- Recall: 0.8941
- F1: 0.8766
- Accuracy: 0.9761
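A minimal inference sketch with the transformers pipeline API (not from the card; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="wisejiyoon/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```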
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1321 | 1.0 | 1756 | 0.1003 | 0.8010 | 0.8514 | 0.8254 | 0.9687 |
| 0.0654 | 2.0 | 3512 | 0.0927 | 0.8331 | 0.8862 | 0.8588 | 0.9739 |
| 0.0382 | 3.0 | 5268 | 0.0943 | 0.8597 | 0.8941 | 0.8766 | 0.9761 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
omersubasi/xlm-roberta-base-finetuned-panx-de-fr
|
omersubasi
| 2023-12-08T05:42:49Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-08T05:30:57Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1606
- F1: 0.8620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2873 | 1.0 | 715 | 0.1802 | 0.8245 |
| 0.1446 | 2.0 | 1430 | 0.1601 | 0.8512 |
| 0.0925 | 3.0 | 2145 | 0.1606 | 0.8620 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu118
- Datasets 1.16.1
- Tokenizers 0.15.0
|
supark0514/peft_model_id
|
supark0514
| 2023-12-08T05:42:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google-bert/bert-base-uncased",
"base_model:adapter:google-bert/bert-base-uncased",
"region:us"
] | null | 2023-12-03T18:01:57Z |
---
library_name: peft
base_model: bert-base-uncased
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.0
|
pratikthakkar007/fz61-9i4h-j3s8-0
|
pratikthakkar007
| 2023-12-08T05:28:35Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-08T05:27:08Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
KelvinLLL/560m_LORA_TUNING_CAUSAL_LM_10epoch
|
KelvinLLL
| 2023-12-08T05:28:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"region:us"
] | null | 2023-11-14T19:57:40Z |
---
library_name: peft
base_model: bigscience/bloomz-560m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.0
|
leejunhyeok/MoMo-70B-LoRA-V1.2_1
|
leejunhyeok
| 2023-12-08T05:27:02Z | 0 | 0 |
transformers
|
[
"transformers",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2023-12-08T04:43:54Z |
---
license: llama2
language:
- en
library_name: transformers
---
## Dataset Details
### Used Datasets
- Orca-style dataset
- Alpaca-style dataset
- No other dataset was used except for the dataset mentioned above
- No benchmark test sets or their training sets were used
## Prompt Template
### Alpaca-style
|
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-07
|
alinerodrigues
| 2023-12-08T05:25:14Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-08T00:12:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-07
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1382
- Wer: 0.0921
- Cer: 0.0289
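A minimal transcription sketch with the transformers pipeline API (an assumption: the repo ships the CTC processor alongside the weights; the audio path is illustrative):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-07",
)
print(asr("exemplo_portugues.wav")["text"])
```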
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 28.6541 | 1.0 | 86 | 3.2004 | 1.0 | 1.0 |
| 7.0678 | 2.0 | 172 | 2.9330 | 1.0 | 1.0 |
| 3.0229 | 3.0 | 258 | 2.9009 | 1.0 | 1.0 |
| 2.931 | 4.0 | 344 | 2.9000 | 1.0 | 1.0 |
| 2.9177 | 5.0 | 430 | 2.8428 | 1.0 | 1.0 |
| 2.6907 | 6.0 | 516 | 1.2090 | 0.9756 | 0.2837 |
| 1.3248 | 7.0 | 602 | 0.5202 | 0.2478 | 0.0702 |
| 1.3248 | 8.0 | 688 | 0.3687 | 0.1941 | 0.0567 |
| 0.7818 | 9.0 | 774 | 0.3006 | 0.1641 | 0.0482 |
| 0.5879 | 10.0 | 860 | 0.2740 | 0.1586 | 0.0476 |
| 0.5164 | 11.0 | 946 | 0.2360 | 0.1508 | 0.0446 |
| 0.4796 | 12.0 | 1032 | 0.2204 | 0.1385 | 0.0423 |
| 0.4054 | 13.0 | 1118 | 0.2071 | 0.1337 | 0.0399 |
| 0.3639 | 14.0 | 1204 | 0.2009 | 0.1148 | 0.0365 |
| 0.3639 | 15.0 | 1290 | 0.1918 | 0.1201 | 0.0370 |
| 0.3318 | 16.0 | 1376 | 0.1841 | 0.1073 | 0.0342 |
| 0.3192 | 17.0 | 1462 | 0.1804 | 0.1064 | 0.0344 |
| 0.2857 | 18.0 | 1548 | 0.1783 | 0.1030 | 0.0336 |
| 0.2867 | 19.0 | 1634 | 0.1730 | 0.1037 | 0.0331 |
| 0.2703 | 20.0 | 1720 | 0.1643 | 0.0993 | 0.0323 |
| 0.2625 | 21.0 | 1806 | 0.1624 | 0.1030 | 0.0320 |
| 0.2625 | 22.0 | 1892 | 0.1676 | 0.1003 | 0.0319 |
| 0.255 | 23.0 | 1978 | 0.1610 | 0.1003 | 0.0317 |
| 0.241 | 24.0 | 2064 | 0.1628 | 0.0981 | 0.0314 |
| 0.2422 | 25.0 | 2150 | 0.1607 | 0.0967 | 0.0312 |
| 0.2293 | 26.0 | 2236 | 0.1582 | 0.0957 | 0.0311 |
| 0.2222 | 27.0 | 2322 | 0.1550 | 0.0948 | 0.0304 |
| 0.227 | 28.0 | 2408 | 0.1564 | 0.0960 | 0.0310 |
| 0.227 | 29.0 | 2494 | 0.1601 | 0.0984 | 0.0310 |
| 0.2193 | 30.0 | 2580 | 0.1506 | 0.0911 | 0.0293 |
| 0.197 | 31.0 | 2666 | 0.1519 | 0.0948 | 0.0303 |
| 0.2087 | 32.0 | 2752 | 0.1510 | 0.0933 | 0.0304 |
| 0.1868 | 33.0 | 2838 | 0.1515 | 0.0984 | 0.0313 |
| 0.2023 | 34.0 | 2924 | 0.1538 | 0.0906 | 0.0296 |
| 0.206 | 35.0 | 3010 | 0.1473 | 0.0945 | 0.0299 |
| 0.206 | 36.0 | 3096 | 0.1473 | 0.0892 | 0.0292 |
| 0.2013 | 37.0 | 3182 | 0.1418 | 0.0928 | 0.0293 |
| 0.1733 | 38.0 | 3268 | 0.1459 | 0.0882 | 0.0282 |
| 0.1813 | 39.0 | 3354 | 0.1474 | 0.0904 | 0.0288 |
| 0.1903 | 40.0 | 3440 | 0.1423 | 0.0916 | 0.0294 |
| 0.1599 | 41.0 | 3526 | 0.1444 | 0.0914 | 0.0292 |
| 0.1576 | 42.0 | 3612 | 0.1439 | 0.0904 | 0.0287 |
| 0.1576 | 43.0 | 3698 | 0.1495 | 0.0882 | 0.0287 |
| 0.1603 | 44.0 | 3784 | 0.1407 | 0.0909 | 0.0294 |
| 0.1726 | 45.0 | 3870 | 0.1413 | 0.0897 | 0.0289 |
| 0.1582 | 46.0 | 3956 | 0.1416 | 0.0892 | 0.0284 |
| 0.1622 | 47.0 | 4042 | 0.1416 | 0.0894 | 0.0280 |
| 0.1581 | 48.0 | 4128 | 0.1404 | 0.0887 | 0.0284 |
| 0.1606 | 49.0 | 4214 | 0.1396 | 0.0894 | 0.0284 |
| 0.1422 | 50.0 | 4300 | 0.1458 | 0.0892 | 0.0281 |
| 0.1422 | 51.0 | 4386 | 0.1411 | 0.0919 | 0.0293 |
| 0.1611 | 52.0 | 4472 | 0.1417 | 0.0919 | 0.0293 |
| 0.1354 | 53.0 | 4558 | 0.1462 | 0.0909 | 0.0291 |
| 0.1433 | 54.0 | 4644 | 0.1382 | 0.0921 | 0.0289 |
| 0.145 | 55.0 | 4730 | 0.1424 | 0.0948 | 0.0293 |
| 0.1418 | 56.0 | 4816 | 0.1441 | 0.0921 | 0.0295 |
| 0.1544 | 57.0 | 4902 | 0.1435 | 0.0950 | 0.0293 |
| 0.1544 | 58.0 | 4988 | 0.1437 | 0.0940 | 0.0292 |
| 0.1342 | 59.0 | 5074 | 0.1468 | 0.0921 | 0.0289 |
| 0.1398 | 60.0 | 5160 | 0.1491 | 0.0923 | 0.0293 |
| 0.1325 | 61.0 | 5246 | 0.1451 | 0.0919 | 0.0293 |
| 0.1496 | 62.0 | 5332 | 0.1466 | 0.0926 | 0.0290 |
| 0.1283 | 63.0 | 5418 | 0.1447 | 0.0892 | 0.0286 |
| 0.1275 | 64.0 | 5504 | 0.1431 | 0.0868 | 0.0276 |
| 0.1275 | 65.0 | 5590 | 0.1452 | 0.0856 | 0.0271 |
| 0.126 | 66.0 | 5676 | 0.1463 | 0.0919 | 0.0284 |
| 0.1404 | 67.0 | 5762 | 0.1470 | 0.0911 | 0.0283 |
| 0.1446 | 68.0 | 5848 | 0.1444 | 0.0914 | 0.0282 |
| 0.1357 | 69.0 | 5934 | 0.1443 | 0.0906 | 0.0275 |
| 0.1269 | 70.0 | 6020 | 0.1440 | 0.0894 | 0.0276 |
| 0.1333 | 71.0 | 6106 | 0.1457 | 0.0902 | 0.0275 |
| 0.1333 | 72.0 | 6192 | 0.1454 | 0.0880 | 0.0278 |
| 0.1355 | 73.0 | 6278 | 0.1407 | 0.0890 | 0.0275 |
| 0.1108 | 74.0 | 6364 | 0.1428 | 0.0928 | 0.0287 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.13.3
|
agni1024/yolos-pocket
|
agni1024
| 2023-12-08T05:22:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"yolos",
"object-detection",
"YOLOS",
"Object detection",
"en",
"dataset:detection-datasets/fashionpedia",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-12-06T09:28:48Z |
---
datasets:
- detection-datasets/fashionpedia
language:
- en
pipeline_tag: object-detection
tags:
- YOLOS
- Object detection
---
This is a fine-tuned object detection model for detecting only pockets.
This model supports the following categories:
CATS = ['pocket']
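A minimal detection sketch with the transformers pipeline API (an assumption: the repo includes the YOLOS image-processor config; the image path and threshold are illustrative):
```python
from transformers import pipeline

detector = pipeline("object-detection", model="agni1024/yolos-pocket")
for det in detector("jacket.jpg", threshold=0.5):
    print(det["label"], round(det["score"], 3), det["box"])
```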
|
msyed123/ppo-LunarLander-v2
|
msyed123
| 2023-12-08T04:54:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-08T04:54:25Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 180.88 +/- 107.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
hungphongtrn/phobert-base-v2-VietMed-v3
|
hungphongtrn
| 2023-12-08T04:40:45Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-08T03:20:31Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
model-index:
- name: phobert-base-v2-VietMed-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-base-v2-VietMed-v3
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
brettbbb/mc_cot_128
|
brettbbb
| 2023-12-08T04:40:41Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:finetune:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2023-12-08T04:05:07Z |
---
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- generated_from_trainer
model-index:
- name: mc_cot_128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mc_cot_128
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.14.1
|
0xkhairi/out
|
0xkhairi
| 2023-12-08T04:24:19Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"license:apache-2.0",
"region:us"
] | null | 2023-12-08T04:00:11Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
model-index:
- name: out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# out
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 150
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1.dev0
- Transformers 4.36.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
FallingPlanet/EmoBERTv2-tiny
|
FallingPlanet
| 2023-12-08T04:20:59Z | 0 | 0 | null |
[
"text-classification",
"en",
"license:mit",
"region:us"
] |
text-classification
| 2023-12-08T00:23:33Z |
---
license: mit
language:
- en
pipeline_tag: text-classification
---
# EmoBERTv2 Model
This Model Card is a work in progress and will be completed in the future (dataset upload pending, etc)
## Model Description
EmoBERTv2 is an emotion text classification model trained on a large dataset of English social media posts. The model is fine-tuned
from "prajjwal1-bert-tiny". EmoBERTv2 can be used either for further fine-tuning or for real-time emotion prediction applications.
## Datasets
This model was trained on the [Dataset Name] dataset, which is an aggregation of many datasets through relabeling and data subsetting. The
dataset has 9 labels: joy, sad, love, anger, disgust, surprise, neutral, fear, and worry
## Training Procedure
EmoBERTv2 was fine-tuned from [Base Model Name] with specific hyperparameters [List Hyperparameters]. Training involved [X] epochs, using a learning rate of [Y].
## Intended Use
This model is intended for emotion classification in [specific domains or general use]. It should be used as a tool for [Specify Applications].
## Performance
EmoBERTv2 demonstrates an accuracy of 86.17% on the [Test Dataset Name] test set. For detailed performance metrics, refer to [Link to Performance Metrics].
## Bias and Fairness
While efforts have been made to reduce bias, users should be aware of potential biases in the data. It is advisable to test the model in specific contexts.
## Licensing and Usage
EmoBERTv2 is released under the MIT License and can be freely used as outlined in the license.
## Other Model Variations
Additional variations of EmoBERTv2 include [List Variations]. These variations offer different trade-offs in terms of size, speed, and performance.
|
udemex10/Vojo_AI_LLMforCAD_v1.0
|
udemex10
| 2023-12-08T03:58:57Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"falcon",
"custom_code",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2023-12-07T23:50:47Z |
---
library_name: peft
base_model: tiiuae/falcon-7b-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
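For reference, a minimal sketch of how the 8-bit settings above could be recreated with `transformers`' `BitsAndBytesConfig` (an illustration, not the authors' exact training code):

```python
from transformers import BitsAndBytesConfig

# Mirror of the 8-bit quantization settings listed above.
# The 4-bit fields are left at their defaults because load_in_4bit is False.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```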
### Framework versions
- PEFT 0.7.0
|
mjaydenkim/test_trainer
|
mjaydenkim
| 2023-12-08T03:53:26Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:tomh/toxigen_hatebert",
"base_model:finetune:tomh/toxigen_hatebert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-08T03:27:52Z |
---
base_model: tomh/toxigen_hatebert
tags:
- generated_from_trainer
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [tomh/toxigen_hatebert](https://huggingface.co/tomh/toxigen_hatebert) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0.post101
- Datasets 2.14.6
- Tokenizers 0.13.3
|
oostapeno/ng3_from_nt_1B_repl_none_none
|
oostapeno
| 2023-12-08T03:44:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-12-07T19:11:36Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| dbpedia_14_given_a_choice_of_categories__v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| squad_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| quoref_Find_Answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| cos_e_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| yelp_polarity_reviews_0_2_0_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| duorc_SelfRC_answer_question_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| adversarial_qa_dbidaf_generate_question_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| ai2_arc_ARC_Challenge_1_0_0_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
| duorc_ParaphraseRC_answer_question_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
Last updated on: 2023-12-08 03:44:00+00:00
|
oostapeno/ng3_from_nt_1B_repl_lora_sim_none
|
oostapeno
| 2023-12-08T03:42:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-12-07T19:10:57Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| dbpedia_14_given_a_choice_of_categories__v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| squad_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| quoref_Find_Answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| cos_e_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| yelp_polarity_reviews_0_2_0_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| duorc_SelfRC_answer_question_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| adversarial_qa_dbidaf_generate_question_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| ai2_arc_ARC_Challenge_1_0_0_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
| duorc_ParaphraseRC_answer_question_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
Last updated on: 2023-12-08 03:42:10+00:00
|
stockmark/bart-base-japanese-news
|
stockmark
| 2023-12-08T03:39:50Z | 81 | 10 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bart",
"text2text-generation",
"ja",
"japanese",
"lm",
"nlp",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-20T04:23:07Z |
---
language: ja
tags:
- ja
- japanese
- bart
- lm
- nlp
license: mit
---
# bart-base-japanese-news(base-sized model)
This repository provides a Japanese BART model. The model was trained by [Stockmark Inc.](https://stockmark.co.jp)
An introductory article on the model can be found at the following URL.
[https://tech.stockmark.co.jp/blog/bart-japanese-base-news/](https://tech.stockmark.co.jp/blog/bart-japanese-base-news/)
## Model description
BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
## Intended uses & limitations
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset.
# How to use the model
*NOTE:* Since we are using a custom tokenizer, please use `trust_remote_code=True` to initialize the tokenizer.
## Simple use
```python
from transformers import AutoTokenizer, BartModel
model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartModel.from_pretrained(model_name)
inputs = tokenizer("仿¥ã¯è¯ã倩æ°ã§ãã", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
## Sentence Permutation
```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration
model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartForConditionalGeneration.from_pretrained(model_name)
if torch.cuda.is_available():
model = model.to("cuda")
# correct order text is "明日は大雨です。電車は止まる可能性がありますので、自宅から働きます。"
text = "電車は止まる可能性がありますので、自宅から働きます。明日は大雨です。"
inputs = tokenizer([text], max_length=128, return_tensors="pt", truncation=True)
text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, max_length=128)
output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
# sample output: 明日は大雨です。電車は止まる可能性がありますので、自宅から働きます。
```
## Mask filling
```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration
model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartForConditionalGeneration.from_pretrained(model_name)
if torch.cuda.is_available():
model = model.to("cuda")
text = "仿¥ã®å€©æ°ã¯<mask>ã®ãããåãå¿
èŠã§ãããã"
inputs = tokenizer([text], max_length=128, return_tensors="pt", truncation=True)
text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, max_length=128)
output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
# sample output: 今日の天気は、雨のため、傘が必要でしょう。
```
## Text generation
*NOTE:* You can use the raw model for text generation. However, the model is mostly meant to be fine-tuned on a supervised dataset.
```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration
model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartForConditionalGeneration.from_pretrained(model_name)
if torch.cuda.is_available():
model = model.to("cuda")
text = "èªç¶èšèªåŠçïŒããããããããããç¥ç§°ïŒNLPïŒã¯ã人éãæ¥åžžçã«äœ¿ã£ãŠããèªç¶èšèªãã³ã³ãã¥ãŒã¿ã«åŠçãããäžé£ã®æè¡ã§ããã人工ç¥èœãšèšèªåŠã®äžåéã§ããããèšç®èšèªåŠãïŒcomputational linguisticsïŒãšã®é¡äŒŒãããããèªç¶èšèªåŠçã¯å·¥åŠçãªèŠç¹ããã®èšèªåŠçãããã®ã«å¯ŸããŠãèšç®èšèªåŠã¯èšèªåŠçèŠç¹ãéèŠããææ³ãããäºãå€ãã"
inputs = tokenizer([text], max_length=512, return_tensors="pt", truncation=True)
text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, min_length=0, max_length=40)
output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
# sample output: 自然言語処理(しぜんげんごしょり、略称:NLP)は、人間が日常的に使っている自然言語をコンピュータに処理させる一連の技術であり、言語学の一分野である。
```
# Training
The model was trained on Japanese News Articles.
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script.
# Licenses
The pretrained models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php).
*NOTE:* Only tokenization_bart_japanese_news.py is [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0). Please see tokenization_bart_japanese_news.py for license details.
# Contact
If you have any questions, please contact us using [our contact form](https://stockmark.co.jp/contact).
# Acknowledgement
This comparison study was supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
|
brettbbb/mc_cot_32
|
brettbbb
| 2023-12-08T03:38:32Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:finetune:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2023-12-08T03:29:33Z |
---
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- generated_from_trainer
model-index:
- name: mc_cot_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mc_cot_32
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.14.1
|
oostapeno/rsgd3_full_1B_repl_finegrained_poly_router_dir_none
|
oostapeno
| 2023-12-08T03:14:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-12-05T05:56:03Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| wiki_hop_original_choose_best_object_interrogative_2_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| wiqa_effect_with_string_answer_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| quoref_Find_Answer_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| dream_baseline_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
| duorc_SelfRC_answer_question_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| squad_v1_1_3_0_0_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| quartz_read_passage_below_choose_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| duorc_ParaphraseRC_title_generation_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| yelp_polarity_reviews_0_2_0_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| cos_e_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
Last updated on: 2023-12-08 03:14:42+00:00
|
gyr66/chinese-roberta-wwm-ext-large-lora-crf-ner
|
gyr66
| 2023-12-08T03:12:44Z | 0 | 0 | null |
[
"pytorch",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-12-07T14:32:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: chinese-roberta-wwm-ext-large-lora-crf-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-roberta-wwm-ext-large-lora-crf-ner
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7867
- Precision: 0.6482
- Recall: 0.7372
- F1: 0.6898
- Accuracy: 0.9347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 28
- eval_batch_size: 56
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7602 | 1.0 | 72 | 0.3759 | 0.4190 | 0.5808 | 0.4868 | 0.9133 |
| 0.3266 | 2.0 | 144 | 0.3221 | 0.5110 | 0.6772 | 0.5825 | 0.9262 |
| 0.263 | 3.0 | 216 | 0.3061 | 0.5373 | 0.6823 | 0.6012 | 0.9308 |
| 0.2355 | 4.0 | 288 | 0.3144 | 0.5385 | 0.6908 | 0.6052 | 0.9277 |
| 0.2042 | 5.0 | 360 | 0.3146 | 0.5690 | 0.7007 | 0.6280 | 0.9320 |
| 0.1856 | 6.0 | 432 | 0.3162 | 0.5676 | 0.6843 | 0.6205 | 0.9300 |
| 0.1644 | 7.0 | 504 | 0.3303 | 0.5810 | 0.7208 | 0.6434 | 0.9336 |
| 0.1536 | 8.0 | 576 | 0.3301 | 0.5851 | 0.7069 | 0.6403 | 0.9337 |
| 0.135 | 9.0 | 648 | 0.3565 | 0.6023 | 0.7072 | 0.6505 | 0.9335 |
| 0.1195 | 10.0 | 720 | 0.3676 | 0.5960 | 0.7276 | 0.6553 | 0.9333 |
| 0.1122 | 11.0 | 792 | 0.3723 | 0.5914 | 0.7256 | 0.6517 | 0.9320 |
| 0.0991 | 12.0 | 864 | 0.3771 | 0.6068 | 0.7115 | 0.6550 | 0.9351 |
| 0.0876 | 13.0 | 936 | 0.3982 | 0.6044 | 0.7132 | 0.6543 | 0.9327 |
| 0.0838 | 14.0 | 1008 | 0.4116 | 0.6081 | 0.7236 | 0.6608 | 0.9345 |
| 0.0786 | 15.0 | 1080 | 0.4065 | 0.6173 | 0.7268 | 0.6676 | 0.9344 |
| 0.0712 | 16.0 | 1152 | 0.4272 | 0.5976 | 0.7155 | 0.6512 | 0.9315 |
| 0.0725 | 17.0 | 1224 | 0.4340 | 0.5970 | 0.7324 | 0.6578 | 0.9308 |
| 0.0695 | 18.0 | 1296 | 0.4482 | 0.6177 | 0.7226 | 0.6660 | 0.9328 |
| 0.0639 | 19.0 | 1368 | 0.4574 | 0.6104 | 0.7251 | 0.6628 | 0.9310 |
| 0.0605 | 20.0 | 1440 | 0.4680 | 0.6105 | 0.7329 | 0.6661 | 0.9309 |
| 0.0556 | 21.0 | 1512 | 0.4534 | 0.6195 | 0.7316 | 0.6709 | 0.9347 |
| 0.049 | 22.0 | 1584 | 0.4726 | 0.6120 | 0.7195 | 0.6614 | 0.9320 |
| 0.0456 | 23.0 | 1656 | 0.4810 | 0.6283 | 0.7281 | 0.6745 | 0.9340 |
| 0.0407 | 24.0 | 1728 | 0.5079 | 0.6373 | 0.7258 | 0.6787 | 0.9332 |
| 0.045 | 25.0 | 1800 | 0.5099 | 0.6133 | 0.7278 | 0.6657 | 0.9322 |
| 0.0376 | 26.0 | 1872 | 0.5292 | 0.6173 | 0.7319 | 0.6697 | 0.9326 |
| 0.0375 | 27.0 | 1944 | 0.5393 | 0.6171 | 0.7248 | 0.6667 | 0.9324 |
| 0.0352 | 28.0 | 2016 | 0.5292 | 0.6091 | 0.7258 | 0.6624 | 0.9322 |
| 0.0339 | 29.0 | 2088 | 0.5431 | 0.6148 | 0.7135 | 0.6605 | 0.9320 |
| 0.0318 | 30.0 | 2160 | 0.5411 | 0.6273 | 0.7213 | 0.6710 | 0.9343 |
| 0.0298 | 31.0 | 2232 | 0.5580 | 0.6227 | 0.7372 | 0.6751 | 0.9316 |
| 0.0301 | 32.0 | 2304 | 0.5587 | 0.6248 | 0.7223 | 0.6700 | 0.9324 |
| 0.0293 | 33.0 | 2376 | 0.5660 | 0.6192 | 0.7213 | 0.6664 | 0.9323 |
| 0.0267 | 34.0 | 2448 | 0.5827 | 0.6202 | 0.7306 | 0.6709 | 0.9318 |
| 0.025 | 35.0 | 2520 | 0.5887 | 0.6241 | 0.7299 | 0.6729 | 0.9323 |
| 0.0239 | 36.0 | 2592 | 0.5861 | 0.6262 | 0.7301 | 0.6742 | 0.9316 |
| 0.0227 | 37.0 | 2664 | 0.6004 | 0.6341 | 0.7341 | 0.6804 | 0.9331 |
| 0.0212 | 38.0 | 2736 | 0.6207 | 0.6353 | 0.7251 | 0.6772 | 0.9331 |
| 0.0198 | 39.0 | 2808 | 0.6226 | 0.6374 | 0.7283 | 0.6798 | 0.9329 |
| 0.0224 | 40.0 | 2880 | 0.6197 | 0.6391 | 0.7299 | 0.6815 | 0.9329 |
| 0.0196 | 41.0 | 2952 | 0.6215 | 0.6438 | 0.7314 | 0.6848 | 0.9334 |
| 0.0221 | 42.0 | 3024 | 0.5998 | 0.6366 | 0.7223 | 0.6767 | 0.9332 |
| 0.0205 | 43.0 | 3096 | 0.6069 | 0.6300 | 0.7203 | 0.6721 | 0.9332 |
| 0.017 | 44.0 | 3168 | 0.6304 | 0.6399 | 0.7261 | 0.6803 | 0.9342 |
| 0.0171 | 45.0 | 3240 | 0.6519 | 0.6370 | 0.7258 | 0.6785 | 0.9327 |
| 0.0167 | 46.0 | 3312 | 0.6418 | 0.6298 | 0.7301 | 0.6762 | 0.9339 |
| 0.0175 | 47.0 | 3384 | 0.6495 | 0.6377 | 0.7304 | 0.6809 | 0.9326 |
| 0.0171 | 48.0 | 3456 | 0.6433 | 0.6399 | 0.7351 | 0.6842 | 0.9342 |
| 0.0146 | 49.0 | 3528 | 0.6498 | 0.6454 | 0.7223 | 0.6817 | 0.9340 |
| 0.0141 | 50.0 | 3600 | 0.6427 | 0.6421 | 0.7228 | 0.6801 | 0.9343 |
| 0.0131 | 51.0 | 3672 | 0.6530 | 0.6308 | 0.7346 | 0.6788 | 0.9327 |
| 0.0136 | 52.0 | 3744 | 0.6545 | 0.6251 | 0.7190 | 0.6688 | 0.9315 |
| 0.0134 | 53.0 | 3816 | 0.6686 | 0.6334 | 0.7273 | 0.6771 | 0.9324 |
| 0.0118 | 54.0 | 3888 | 0.6773 | 0.6353 | 0.7331 | 0.6807 | 0.9336 |
| 0.0108 | 55.0 | 3960 | 0.6751 | 0.6453 | 0.7329 | 0.6863 | 0.9334 |
| 0.0119 | 56.0 | 4032 | 0.6844 | 0.6416 | 0.7296 | 0.6828 | 0.9340 |
| 0.0109 | 57.0 | 4104 | 0.6733 | 0.6351 | 0.7301 | 0.6793 | 0.9341 |
| 0.0102 | 58.0 | 4176 | 0.6876 | 0.6445 | 0.7394 | 0.6887 | 0.9344 |
| 0.0115 | 59.0 | 4248 | 0.6928 | 0.6303 | 0.7321 | 0.6774 | 0.9320 |
| 0.0109 | 60.0 | 4320 | 0.6987 | 0.6300 | 0.7246 | 0.6740 | 0.9332 |
| 0.0099 | 61.0 | 4392 | 0.6952 | 0.6402 | 0.7346 | 0.6842 | 0.9342 |
| 0.0098 | 62.0 | 4464 | 0.7020 | 0.6462 | 0.7445 | 0.6919 | 0.9338 |
| 0.0091 | 63.0 | 4536 | 0.6969 | 0.6464 | 0.7369 | 0.6887 | 0.9342 |
| 0.0082 | 64.0 | 4608 | 0.7141 | 0.6537 | 0.7409 | 0.6946 | 0.9346 |
| 0.0082 | 65.0 | 4680 | 0.7011 | 0.6427 | 0.7306 | 0.6839 | 0.9333 |
| 0.0082 | 66.0 | 4752 | 0.7264 | 0.6494 | 0.7392 | 0.6914 | 0.9339 |
| 0.0075 | 67.0 | 4824 | 0.7010 | 0.6531 | 0.7334 | 0.6909 | 0.9345 |
| 0.0072 | 68.0 | 4896 | 0.7271 | 0.6401 | 0.7349 | 0.6842 | 0.9337 |
| 0.0075 | 69.0 | 4968 | 0.7262 | 0.6471 | 0.7414 | 0.6911 | 0.9336 |
| 0.0071 | 70.0 | 5040 | 0.7196 | 0.6474 | 0.7364 | 0.6890 | 0.9342 |
| 0.008 | 71.0 | 5112 | 0.7103 | 0.6446 | 0.7379 | 0.6881 | 0.9342 |
| 0.0066 | 72.0 | 5184 | 0.7365 | 0.6534 | 0.7417 | 0.6947 | 0.9349 |
| 0.0063 | 73.0 | 5256 | 0.7411 | 0.6444 | 0.7372 | 0.6876 | 0.9341 |
| 0.0064 | 74.0 | 5328 | 0.7270 | 0.6372 | 0.7394 | 0.6845 | 0.9339 |
| 0.0063 | 75.0 | 5400 | 0.7411 | 0.6458 | 0.7399 | 0.6897 | 0.9346 |
| 0.0055 | 76.0 | 5472 | 0.7303 | 0.6449 | 0.7384 | 0.6885 | 0.9344 |
| 0.0053 | 77.0 | 5544 | 0.7524 | 0.6471 | 0.7424 | 0.6915 | 0.9343 |
| 0.0055 | 78.0 | 5616 | 0.7514 | 0.6451 | 0.7397 | 0.6892 | 0.9346 |
| 0.0046 | 79.0 | 5688 | 0.7511 | 0.6504 | 0.7394 | 0.6920 | 0.9349 |
| 0.0046 | 80.0 | 5760 | 0.7644 | 0.6422 | 0.7432 | 0.6890 | 0.9342 |
| 0.0048 | 81.0 | 5832 | 0.7580 | 0.6486 | 0.7435 | 0.6928 | 0.9347 |
| 0.0051 | 82.0 | 5904 | 0.7442 | 0.6455 | 0.7359 | 0.6878 | 0.9344 |
| 0.0046 | 83.0 | 5976 | 0.7594 | 0.6382 | 0.7417 | 0.6861 | 0.9339 |
| 0.0045 | 84.0 | 6048 | 0.7577 | 0.6476 | 0.7389 | 0.6903 | 0.9347 |
| 0.0043 | 85.0 | 6120 | 0.7583 | 0.6515 | 0.7440 | 0.6946 | 0.9350 |
| 0.0041 | 86.0 | 6192 | 0.7596 | 0.6536 | 0.7382 | 0.6933 | 0.9351 |
| 0.0034 | 87.0 | 6264 | 0.7676 | 0.6555 | 0.7412 | 0.6957 | 0.9347 |
| 0.0039 | 88.0 | 6336 | 0.7645 | 0.6520 | 0.7442 | 0.6950 | 0.9352 |
| 0.0044 | 89.0 | 6408 | 0.7652 | 0.6516 | 0.7392 | 0.6926 | 0.9348 |
| 0.0042 | 90.0 | 6480 | 0.7667 | 0.6474 | 0.7379 | 0.6897 | 0.9347 |
| 0.003 | 91.0 | 6552 | 0.7715 | 0.6458 | 0.7387 | 0.6891 | 0.9352 |
| 0.0038 | 92.0 | 6624 | 0.7796 | 0.6462 | 0.7356 | 0.6880 | 0.9351 |
| 0.003 | 93.0 | 6696 | 0.7807 | 0.6546 | 0.7387 | 0.6941 | 0.9350 |
| 0.0028 | 94.0 | 6768 | 0.7829 | 0.6503 | 0.7364 | 0.6907 | 0.9349 |
| 0.0032 | 95.0 | 6840 | 0.7838 | 0.6482 | 0.7412 | 0.6916 | 0.9349 |
| 0.0029 | 96.0 | 6912 | 0.7865 | 0.6468 | 0.7409 | 0.6907 | 0.9349 |
| 0.003 | 97.0 | 6984 | 0.7867 | 0.6470 | 0.7402 | 0.6905 | 0.9350 |
| 0.0028 | 98.0 | 7056 | 0.7878 | 0.6465 | 0.7382 | 0.6893 | 0.9348 |
| 0.003 | 99.0 | 7128 | 0.7874 | 0.6487 | 0.7379 | 0.6905 | 0.9347 |
| 0.0028 | 100.0 | 7200 | 0.7867 | 0.6482 | 0.7372 | 0.6898 | 0.9347 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.2
|
Delcos/Starling-LM-11B-alpha
|
Delcos
| 2023-12-08T03:01:05Z | 1,463 | 4 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"starling",
"conversational",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-06T07:23:04Z |
---
license: cc-by-nc-nd-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- starling
- mistral
---
<h1 style="font-size: 36px;">Starling, now with more parameters.</h1>
This is the 11b version of Starling. More details soon.
<h1 style="font-size: 36px;">PENDING</h1>
## Description:
**PENDING**
**Uncensored**
## Main Goals:
## Prompt Template:
**For optimal interaction, use this template:**
# Recommended Settings:
# Benchmarks:
PENDING
# Training Data:
PENDING
|
Delcos/airoboros-mistral2.2-Starling-11b
|
Delcos
| 2023-12-08T02:55:53Z | 17 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"starling",
"llama-2",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-07T06:07:05Z |
---
license: cc-by-nc-nd-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- starling
- mistral
- llama-2
---
<h1 style="font-size: 36px;">PENDING</h1>
## Description:
**PENDING**
**Uncensored**
## Main Goals:
## Prompt Template:
**For optimal interaction, use this template:**
```
### Instruction:
### Response:
```
# Recommended Settings:
**Defaults:**
```
temperature: 0.77
top_p: 0.85
top_k: 20
repetition_penalty: 1.2
```
**Better context but a little more repetitive in some cases:**
```
temperature: 0.8
top_p: 0.85
top_k: 20
repetition_penalty: 1.2
guidance_scale: 1.25
```
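# Example:

A hedged sketch of how the default settings above might be applied with `transformers`, assuming the standard `AutoModelForCausalLM` API (`guidance_scale` from the second preset is omitted, as classifier-free guidance setup varies by backend; the instruction text is only an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Delcos/airoboros-mistral2.2-Starling-11b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Prompt follows the template above.
prompt = "### Instruction:\nWrite a short story about a lighthouse keeper.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Recommended default sampling settings from this card.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.77,
    top_p=0.85,
    top_k=20,
    repetition_penalty=1.2,
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```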
# Benchmarks:
PENDING
# Training Data:
PENDING
|
arnabdhar/distilbert-base-amazon-multi
|
arnabdhar
| 2023-12-08T02:49:55Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"pytorch",
"amazon-rating",
"DistilBERTForSequenceClassification",
"generated_from_trainer",
"en",
"de",
"es",
"fr",
"ja",
"zh",
"dataset:mteb/amazon_reviews_multi",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-02T02:38:16Z |
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- pytorch
- amazon-rating
- DistilBERTForSequenceClassification
- generated_from_trainer
metrics:
- accuracy
- matthews_correlation
model-index:
- name: distilbert-base-amazon-multi
results: []
datasets:
- mteb/amazon_reviews_multi
language:
- en
- de
- es
- fr
- ja
- zh
library_name: transformers
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-amazon-multi
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the mteb/amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9292
- Accuracy: 0.6055
- Matthews Correlation: 0.5072
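## How to use

A minimal inference sketch (assuming the standard `text-classification` pipeline; the example review is illustrative):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="arnabdhar/distilbert-base-amazon-multi")

# The model was fine-tuned on multilingual Amazon reviews, so any of the six supported languages should work.
print(classifier("This product broke after two days. Very disappointed."))
```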
## Training procedure
This model was fine-tuned on Google Colab using a single **NVIDIA V100** GPU with 16GB of VRAM. It took around 13 hours to finish the fine-tuning of 10_000 steps.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 320
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 100000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Matthews Correlation |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:--------------------:|
| 1.0008 | 0.26 | 10000 | 1.0027 | 0.5616 | 0.4520 |
| 0.9545 | 0.51 | 20000 | 0.9705 | 0.5810 | 0.4788 |
| 0.9216 | 0.77 | 30000 | 0.9415 | 0.5883 | 0.4868 |
| 0.8765 | 1.03 | 40000 | 0.9495 | 0.5891 | 0.4871 |
| 0.8837 | 1.28 | 50000 | 0.9254 | 0.5992 | 0.4997 |
| 0.8753 | 1.54 | 60000 | 0.9199 | 0.6014 | 0.5029 |
| 0.8572 | 1.8 | 70000 | 0.9108 | 0.6090 | 0.5117 |
| 0.7851 | 2.05 | 80000 | 0.9276 | 0.6052 | 0.5066 |
| 0.7918 | 2.31 | 90000 | 0.9292 | 0.6055 | 0.5072 |
| 0.793 | 2.57 | 100000 | 0.9288 | 0.6064 | 0.5084 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
seetvn/my_awesome_model
|
seetvn
| 2023-12-08T02:45:56Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-05T21:59:18Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2751
- Accuracy: 0.8912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 150 | 0.3110 | 0.8788 |
| No log | 2.0 | 300 | 0.2751 | 0.8912 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hahahafofo/Qwen-1_8B-Stable-Diffusion-Prompt_v0.5
|
hahahafofo
| 2023-12-08T02:36:44Z | 6 | 4 |
transformers
|
[
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-12-06T06:24:50Z |
---
license: apache-2.0
---
This model is designed to generate prompts for Stable Diffusion. It primarily translates Chinese into corresponding English prompts, leveraging the capabilities of Qwen, and supports a variety of formats, including classical Chinese poetry, for prompt generation. The model has been fine-tuned specifically for this task (SFT) using a dataset of 2,000 entries. During its training phase, it underwent 5 epochs with a learning rate of 1e-05 and achieved a loss value of 0.19.
The corresponding code for the model is available on GitHub. You can access and use SDXL-Turbo for rapid test generation of prompts through the following link:
[https://github.com/zhongpei/Qwen-SDXL-Turbo.git](https://github.com/zhongpei/Qwen-SDXL-Turbo.git)
**Parameters Table:**
| Parameter | Value |
| -------------- | -------- |
| Data | 2,000 entries |
| Epochs | 5 |
| Learning Rate | 1e-05 |
| Loss | 0.19 |
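**Usage sketch:**

A minimal, hedged example of loading the model with `trust_remote_code=True` (required by the Qwen custom code) and generating a prompt from a short Chinese description. The input text and generation parameters are illustrative; the exact prompt template expected by this fine-tune is not documented here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hahahafofo/Qwen-1_8B-Stable-Diffusion-Prompt_v0.5"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

# "A lone traveler under the starry sky" -- an example Chinese scene description.
text = "星空下的孤独旅人"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```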
|
mitchyAI/hyewonmchy
|
mitchyAI
| 2023-12-08T02:30:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-08T02:29:21Z |
---
license: creativeml-openrail-m
---
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_RandomError20percent_Seed101
|
behzadnet
| 2023-12-08T01:56:35Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-07T18:53:47Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
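A hedged sketch of how this adapter might be loaded on top of its base model with PEFT, mirroring the 4-bit NF4 settings above (an illustration under stated assumptions, not the authors' exact code):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"
adapter_id = "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_RandomError20percent_Seed101"

# Recreate the 4-bit NF4 settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter
```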
### Framework versions
- PEFT 0.7.0.dev0
|
metamath/xlm-roberta-base-finetuned-panx-de
|
metamath
| 2023-12-08T01:54:04Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-08T01:33:31Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8653168012740302
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1356
- F1: 0.8653
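## How to use

A minimal inference sketch (assuming the standard token-classification pipeline; the German example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="metamath/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

# "Jeff Dean works at Google in California."
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```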
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2579 | 1.0 | 525 | 0.1522 | 0.8209 |
| 0.1261 | 2.0 | 1050 | 0.1360 | 0.8520 |
| 0.079 | 3.0 | 1575 | 0.1356 | 0.8653 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
alif-munim/llama-2-7b-guanaco
|
alif-munim
| 2023-12-08T01:51:29Z | 2 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-12-08T01:29:49Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.3.dev0
|
jhliu/ClinicalNoteBERT-small
|
jhliu
| 2023-12-08T01:40:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"medical",
"clinical",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-12-08T01:20:03Z |
---
language: en
tags:
- medical
- clinical
---
# ClinicalNoteBERT
Using openly available clinical notes, we pretrain ClinicalNoteBERT, a series of encoders of three model sizes (110M, 67M, and 14.5M) that consider note contexts and variations during pretraining. We adopt a range of downstream applications to evaluate ClinicalNoteBERT, including tasks in fine-tuning, unsupervised semantic textual similarity, retrieval-augmented generation of LLMs, and unimodal and multimodal clinical predictions, and compare with strong baselines. Our models achieve better results than the baseline models of similar or larger sizes on various tasks and datasets. We find that different choices made during pretraining can lead to varied improvements for the downstream tasks. Our small and tiny versions of ClinicalNoteBERT maintain over 96% and 91% of the best performance with less than 61% and 14% of the parameters, respectively.
## Overall performance
| | # Params | FT | STS | RAG | CP | Fusion |
| -------------------------- | -------- | -------- | -------- | -------- | -------- | -------- |
| ClinicalNoteBERT-note-only | 110M | 80.0 | 78.9 | **14.0** | 63.8 | 66.5 |
| ClinicalNoteBERT-note-ntp | 110M | **80.6** | 73.6 | 13.0 | 62.9 | 65.8 |
| ClinicalNoteBERT-base | 110M | 80.1 | **79.8** | 12.3 | 64.0 | 66.7 |
| ClinicalNoteBERT-small | 67M | 78.1 | 77.1 | 11.4 | **64.6** | **66.8** |
| ClinicalNoteBERT-tiny | 14.5M | 74.1 | 75.7 | 8.9 | 62.4 | 65.5 |
FT: fine-tuning. STS: semantic textual similarity (ClinicalSTS). RAG: retrieval augmented generation (GPT2, Llama2). CP: clinical prediction. Fusion: multimodal fusion for clinical prediction.
When encoding text sequences for STS, RAG, and CP/Fusion, ClinicalNoteBERT models are adapted through extra SimCSE training in the unsupervised fashion using varied sequence lengths/types. *Sequence-sentence*, *sequence-segment*, and *sequence-note* are used for STS, RAG, and CP/Fusion, respectively, according to their corresponding lengths. More details can be found in the paper.
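## Example usage

A minimal sketch of loading the encoder and embedding a short clinical sentence with `transformers` (the mean pooling below is illustrative and is not the adaptation or evaluation protocol from the paper):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "jhliu/ClinicalNoteBERT-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentence = "Patient denies chest pain or shortness of breath."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)

# Simple mean-pooled sentence embedding (illustrative only).
mask = inputs["attention_mask"].unsqueeze(-1).float()
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
```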
## Citation
Under review
|
thingthatis/sd-turbo
|
thingthatis
| 2023-12-08T01:36:47Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-08T01:36:46Z |
---
pipeline_tag: text-to-image
inference: false
---
# SD-Turbo Model Card
<!-- Provide a quick summary of what the model is/does. -->

SD-Turbo is a fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation.
We release SD-Turbo as a research artifact, and to study small, distilled text-to-image models. For increased quality and prompt understanding,
we recommend [SDXL-Turbo](https://huggingface.co/stabilityai/sdxl-turbo/).
## Model Details
### Model Description
SD-Turbo is a distilled version of [Stable Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1), trained for real-time synthesis.
SD-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the [technical report](https://stability.ai/research/adversarial-diffusion-distillation)), which allows sampling large-scale foundational
image diffusion models in 1 to 4 steps at high image quality.
This approach uses score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal and combines this with an
adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps.
- **Developed by:** Stability AI
- **Funded by:** Stability AI
- **Model type:** Generative text-to-image model
- **Finetuned from model:** [Stable Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1)
### Model Sources
For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models),
which implements the most popular diffusion frameworks (both training and inference).
- **Repository:** https://github.com/Stability-AI/generative-models
- **Paper:** https://stability.ai/research/adversarial-diffusion-distillation
- **Demo [for the bigger SDXL-Turbo]:** http://clipdrop.co/stable-diffusion-turbo
## Evaluation


The charts above evaluate user preference for SD-Turbo over other single- and multi-step models.
SD-Turbo evaluated at a single step is preferred by human voters in terms of image quality and prompt following over LCM-Lora XL and LCM-Lora 1.5.
**Note:** For increased quality, we recommend the bigger version [SDXL-Turbo](https://huggingface.co/stabilityai/sdxl-turbo/).
For details on the user study, we refer to the [research paper](https://stability.ai/research/adversarial-diffusion-distillation).
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Research on generative models.
- Research on real-time applications of generative models.
- Research on the impact of real-time generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
Excluded uses are described below.
### Diffusers
```
pip install diffusers transformers accelerate --upgrade
```
- **Text-to-image**:
SD-Turbo does not make use of `guidance_scale` or `negative_prompt`, so we disable guidance with `guidance_scale=0.0`.
Preferably, the model generates images of size 512x512 but higher image sizes work as well.
A **single step** is enough to generate high quality images.
```py
from diffusers import AutoPipelineForText2Image
import torch
pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sd-turbo", torch_dtype=torch.float16, variant="fp16")
pipe.to("cuda")
prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe."
image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
```
- **Image-to-image**:
When using SD-Turbo for image-to-image generation, make sure that `num_inference_steps` * `strength` is greater than or equal
to 1. The image-to-image pipeline will run for `int(num_inference_steps * strength)` steps, *e.g.* 0.5 * 2.0 = 1 step in our example
below.
```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image
import torch
pipe = AutoPipelineForImage2Image.from_pretrained("stabilityai/sd-turbo", torch_dtype=torch.float16, variant="fp16")
pipe.to("cuda")
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png").resize((512, 512))
prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"
image = pipe(prompt, image=init_image, num_inference_steps=2, strength=0.5, guidance_scale=0.0).images[0]
```
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events,
and therefore using the model to generate such content is out-of-scope for the abilities of this model.
The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy).
## Limitations and Bias
### Limitations
- The quality and prompt alignment is lower than that of [SDXL-Turbo](https://huggingface.co/stabilityai/sdxl-turbo/).
- The generated images are of a fixed resolution (512x512 pix), and the model does not achieve perfect photorealism.
- The model cannot render legible text.
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Recommendations
The model is intended for research purposes only.
## How to Get Started with the Model
Check out https://github.com/Stability-AI/generative-models
|
AI-bo/gptnum.py
|
AI-bo
| 2023-12-08T01:27:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-06T23:32:14Z |
# GPTnum.py
---
A GPT-2 implementation from scratch, with NumPy as the only dependency
**Heavily inspired by [picoGPT](https://github.com/jaymody/picoGPT) and [llama2.c](https://github.com/karpathy/llama2.c)**
---
**Dependencies**
```bash
pip install numpy
```
**Run guide**
Clone the repository from [huggingface](https://huggingface.co/AI-bo/gptnum.py)
```bash
git clone https://huggingface.co/AI-bo/gptnum.py
cd gptnum.py
python GPTnum.py
```
|
alialhousseini/ppo-LunarLander-v2
|
alialhousseini
| 2023-12-08T01:14:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-08T00:44:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.76 +/- 21.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check this repository's file list for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it; the filename below is assumed.
checkpoint = load_from_hub(repo_id="alialhousseini/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
yesj1234/zhko_xlsr_100p_run2
|
yesj1234
| 2023-12-08T00:55:35Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"./train_dataset.py",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-08T00:53:15Z |
---
base_model: zhko_xlsr_100p_run1
tags:
- automatic-speech-recognition
- ./train_dataset.py
- generated_from_trainer
model-index:
- name: zhko_xlsr_100p_run1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zhko_xlsr_100p_run1
This model is a fine-tuned version of [zhko_xlsr_100p_run1](https://huggingface.co/zhko_xlsr_100p_run1) on the ./TRAIN_DATASET.PY - NA dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.028652816486579e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Nurmukhamed/bert-finetuned-ner
|
Nurmukhamed
| 2023-12-08T00:51:11Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-08T00:40:42Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9323655559221379
- name: Recall
type: recall
value: 0.9511948838774823
- name: F1
type: f1
value: 0.9416861046317894
- name: Accuracy
type: accuracy
value: 0.9865779713898863
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0628
- Precision: 0.9324
- Recall: 0.9512
- F1: 0.9417
- Accuracy: 0.9866
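A quick inference sketch (assumes standard Transformers `pipeline` usage; the example sentence is arbitrary):
```python
from transformers import pipeline

# Token-classification pipeline; aggregation_strategy groups word pieces into whole entities.
ner = pipeline("token-classification", model="Nurmukhamed/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```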
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0761 | 1.0 | 1756 | 0.0697 | 0.9071 | 0.9337 | 0.9202 | 0.9814 |
| 0.0354 | 2.0 | 3512 | 0.0677 | 0.9333 | 0.9465 | 0.9398 | 0.9852 |
| 0.0246 | 3.0 | 5268 | 0.0628 | 0.9324 | 0.9512 | 0.9417 | 0.9866 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
shungyan/product_chatbot
|
shungyan
| 2023-12-08T00:37:07Z | 14 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-12-07T23:48:56Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: shungyan/product_chatbot
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# shungyan/product_chatbot
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7852
- Validation Loss: 1.8731
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.9100 | 3.6066 | 0 |
| 3.1064 | 2.7250 | 1 |
| 2.5975 | 2.5330 | 2 |
| 2.4268 | 2.4540 | 3 |
| 2.3321 | 2.2441 | 4 |
| 2.2069 | 2.2428 | 5 |
| 2.1409 | 2.1035 | 6 |
| 1.9815 | 2.0602 | 7 |
| 1.9231 | 1.9187 | 8 |
| 1.7852 | 1.8731 | 9 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Kekega/flan-t5-base-samsum
|
Kekega
| 2023-12-08T00:35:47Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-08T00:33:14Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-base-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 47.39
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-samsum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3707
- Rouge1: 47.39
- Rouge2: 23.8837
- Rougel: 40.08
- Rougelsum: 43.7241
- Gen Len: 17.2137
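A minimal inference sketch (assumes standard Transformers `pipeline` usage; the dialogue below is an arbitrary example):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Kekega/flan-t5-base-samsum")
dialogue = "Anna: Are we still on for lunch?\nTom: Yes, 12:30 at the usual place.\nAnna: Great, see you there!"
# max_length caps the summary length in tokens.
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```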
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4525 | 1.0 | 1842 | 1.3837 | 46.4021 | 22.8734 | 39.1025 | 42.8284 | 17.2149 |
| 1.3436 | 2.0 | 3684 | 1.3725 | 47.0983 | 23.5269 | 39.8757 | 43.4526 | 17.1954 |
| 1.2821 | 3.0 | 5526 | 1.3708 | 47.2332 | 23.6343 | 39.7749 | 43.4436 | 17.2271 |
| 1.2307 | 4.0 | 7368 | 1.3707 | 47.39 | 23.8837 | 40.08 | 43.7241 | 17.2137 |
| 1.1986 | 5.0 | 9210 | 1.3762 | 47.4841 | 23.9306 | 40.0741 | 43.7225 | 17.2821 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
osunlp/TableLlama
|
osunlp
| 2023-12-07T23:51:11Z | 906 | 28 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:osunlp/TableInstruct",
"arxiv:2311.09206",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-20T01:50:22Z |
---
license: cc-by-4.0
language:
- en
datasets:
- osunlp/TableInstruct
---
# TableLlama: Towards Open Large Generalist Models for Tables
Project Page: [https://osu-nlp-group.github.io/TableLlama/](https://osu-nlp-group.github.io/TableLlama/)
Paper: [https://arxiv.org/abs/2311.09206](https://arxiv.org/abs/2311.09206)
Dataset: [https://huggingface.co/datasets/osunlp/TableInstruct/](https://huggingface.co/datasets/osunlp/TableInstruct/)
Code: [https://osu-nlp-group.github.io/TableLlama/](https://osu-nlp-group.github.io/TableLlama/)
## Introduction
We introduce TableLlama, an open-source large generalist model specifically tailored for various table-based tasks. The TableLlama model is trained on the 🤗 [TableInstruct Dataset](https://huggingface.co/datasets/osunlp/TableInstruct), a meticulously curated instruction-tuning dataset for tables. TableLlama is tuned on 2.6 million table-based task examples and can handle up to 8K context!
## Model
[TableLlama-7B](https://huggingface.co/osunlp/TableLlama/)
## Data
The models are trained on the 🤗 [TableInstruct Dataset](https://huggingface.co/datasets/osunlp/TableInstruct), a comprehensive table-based instruction-tuning dataset that covers a variety of real-world tables and realistic tasks. It includes 14 datasets spanning 11 tasks in total. Check out the dataset card for more details.
## Training Procedure
The models are fine-tuned on the TableInstruct dataset starting from the fully fine-tuned LongLoRA (7B) variant as the base model, which replaces the vanilla attention mechanism of the original Llama-2 (7B) with shift short attention. Training takes 9 days on a cluster of 48 A100 (80GB) GPUs. Check out our paper for more details.
## Evaluation
The models are evaluated on 8 in-domain datasets of 8 tasks and 6 out-of-domain datasets of 4 tasks.
## Usage
You can use the models through Huggingface's Transformers library.
Check our Github repo for more advanced use: [https://osu-nlp-group.github.io/TableLlama/](https://osu-nlp-group.github.io/TableLlama/)
## Prompt Format
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that
appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Question:
{question}
### Response:
```
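As a rough usage sketch with Transformers (the instruction, table, and question below are placeholders rather than examples from TableInstruct, and the generation settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("osunlp/TableLlama")
model = AutoModelForCausalLM.from_pretrained("osunlp/TableLlama", device_map="auto")

# Fill the prompt template shown above.
prompt = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nAnswer the question based on the table.\n\n"
    "### Input:\n| city | population |\n| Columbus | 905,748 |\n| Dayton | 137,644 |\n\n"
    "### Question:\nWhich city has the larger population?\n\n"
    "### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```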
## Limitations
We've tried our best to build table generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the table tasks and datasets, and not all table-based tasks can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@misc{zhang2023tablellama,
title={TableLlama: Towards Open Large Generalist Models for Tables},
author={Tianshu Zhang and Xiang Yue and Yifei Li and Huan Sun},
year={2023},
eprint={2311.09206},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
wnic00/hihu
|
wnic00
| 2023-12-07T23:29:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:citizenlab/twitter-xlm-roberta-base-sentiment-finetunned",
"base_model:finetune:citizenlab/twitter-xlm-roberta-base-sentiment-finetunned",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-07T06:06:07Z |
---
base_model: citizenlab/twitter-xlm-roberta-base-sentiment-finetunned
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hihu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hihu
This model is a fine-tuned version of [citizenlab/twitter-xlm-roberta-base-sentiment-finetunned](https://huggingface.co/citizenlab/twitter-xlm-roberta-base-sentiment-finetunned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6725
- Accuracy: 0.7227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9752 | 1.0 | 824 | 0.6724 | 0.7109 |
| 0.6377 | 2.0 | 1648 | 0.6598 | 0.7238 |
| 0.5905 | 3.0 | 2472 | 0.6725 | 0.7227 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.0
|
Tatvajsh/dpo_AHS_OPS_WPCS_v6.0_Latest_Try_dec_06
|
Tatvajsh
| 2023-12-07T23:28:53Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"region:us"
] | null | 2023-12-07T00:15:38Z |
---
library_name: peft
base_model: openlm-research/open_llama_3b_v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.7.0
|
lostargon/bloomz-1b1_PROMPT_TUNING_CAUSAL_LM_SQL
|
lostargon
| 2023-12-07T23:27:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloomz-1b1",
"base_model:adapter:bigscience/bloomz-1b1",
"region:us"
] | null | 2023-12-07T23:27:35Z |
---
library_name: peft
base_model: bigscience/bloomz-1b1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.0
|
Venkatesh4342/t5-base-en-to-hi
|
Venkatesh4342
| 2023-12-07T23:27:05Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-03T15:42:28Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-base-en-to-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-en-to-hi
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2581
- Bleu: 0.0425
- Gen Len: 17.7806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 0.2411 | 1.0 | 51639 | 0.2581 | 0.0425 | 17.7806 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
TheBloke/dolphin-2.2-yi-34b-200k-GGUF
|
TheBloke
| 2023-12-07T23:22:05Z | 229 | 14 |
transformers
|
[
"transformers",
"gguf",
"yi",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/samantha-data",
"dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"base_model:cognitivecomputations/dolphin-2.2-yi-34b-200k",
"base_model:quantized:cognitivecomputations/dolphin-2.2-yi-34b-200k",
"license:other",
"region:us"
] | null | 2023-12-07T23:01:31Z |
---
base_model: ehartford/dolphin-2.2-yi-34b-200k
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/samantha-data
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
inference: false
language:
- en
license: other
license_link: LICENSE
license_name: yi-license
model_creator: Eric Hartford
model_name: Dolphin 2.2 Yi 34B 200K
model_type: yi
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Dolphin 2.2 Yi 34B 200K - GGUF
- Model creator: [Eric Hartford](https://huggingface.co/ehartford)
- Original model: [Dolphin 2.2 Yi 34B 200K](https://huggingface.co/ehartford/dolphin-2.2-yi-34b-200k)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Eric Hartford's Dolphin 2.2 Yi 34B 200K](https://huggingface.co/ehartford/dolphin-2.2-yi-34b-200k).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF)
* [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/dolphin-2.2-yi-34b-200k)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [dolphin-2.2-yi-34b-200k.Q2_K.gguf](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes |
| [dolphin-2.2-yi-34b-200k.Q3_K_S.gguf](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss |
| [dolphin-2.2-yi-34b-200k.Q3_K_M.gguf](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss |
| [dolphin-2.2-yi-34b-200k.Q3_K_L.gguf](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss |
| [dolphin-2.2-yi-34b-200k.Q4_0.gguf](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [dolphin-2.2-yi-34b-200k.Q4_K_S.gguf](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k.Q4_K_S.gguf) | Q4_K_S | 4 | 19.54 GB| 22.04 GB | small, greater quality loss |
| [dolphin-2.2-yi-34b-200k.Q4_K_M.gguf](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended |
| [dolphin-2.2-yi-34b-200k.Q5_0.gguf](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [dolphin-2.2-yi-34b-200k.Q5_K_S.gguf](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended |
| [dolphin-2.2-yi-34b-200k.Q5_K_M.gguf](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended |
| [dolphin-2.2-yi-34b-200k.Q6_K.gguf](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss |
| [dolphin-2.2-yi-34b-200k.Q8_0.gguf](https://huggingface.co/TheBloke/dolphin-2.2-yi-34b-200k-GGUF/blob/main/dolphin-2.2-yi-34b-200k.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/dolphin-2.2-yi-34b-200k-GGUF and below it, a specific filename to download, such as: dolphin-2.2-yi-34b-200k.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/dolphin-2.2-yi-34b-200k-GGUF dolphin-2.2-yi-34b-200k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/dolphin-2.2-yi-34b-200k-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/dolphin-2.2-yi-34b-200k-GGUF dolphin-2.2-yi-34b-200k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m dolphin-2.2-yi-34b-200k.Q4_K_M.gguf --color -c 200000 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 200000` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 â Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./dolphin-2.2-yi-34b-200k.Q4_K_M.gguf", # Download the model file first
n_ctx=200000, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./dolphin-2.2-yi-34b-200k.Q4_K_M.gguf", chat_format="chatml")  # Set chat_format according to the model you are using; this model uses ChatML
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain; a short llama-cpp-python sketch follows the links below:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
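As a rough, unofficial sketch of the llama-cpp-python route (the GPU layer count, context size, and sampling settings are assumptions to adapt to your hardware):
```python
from langchain.llms import LlamaCpp

# Load the local GGUF file downloaded earlier; adjust n_gpu_layers/n_ctx for your system.
llm = LlamaCpp(
    model_path="./dolphin-2.2-yi-34b-200k.Q4_K_M.gguf",
    n_gpu_layers=35,
    n_ctx=8192,
    temperature=0.7,
)
prompt = "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n<|im_start|>user\nWrite a haiku about llamas.<|im_end|>\n<|im_start|>assistant\n"
print(llm(prompt))
```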
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, é¿æ, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik BjÀreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Eric Hartford's Dolphin 2.2 Yi 34B 200K
Dolphin 2.2 🐬
https://erichartford.com/dolphin
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" />
Dolphin-2.2-Yi-34b-200k's training was sponsored by [convai](https://www.convai.com/).
This model is based on Yi, and is subject to Yi license.
The base model has 200k context; I fine-tuned it with 16k.
Note: No longer need trust_remote_code! Thank you Yi team!
New in 2.2 are conversation and empathy. With an infusion of curated Samantha and WizardLM DNA, Dolphin can now give you personal advice, will care about your feelings, and has extra training in long multi-turn conversation.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
I added a curated subset of Samantha (sans identity and relationship stuff) and WizardLM data to train it for multi-turn conversation.
## Training
It took 3 days to train 3 epochs on 4x A100s using qLoRA and Axolotl
Prompt format:
This model (and all my future releases) use [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|>
<|im_start|>user
Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|>
<|im_start|>assistant
```
## Gratitude
- This model was made possible by the generous sponsorship of Convai.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to Wing Lian, and TheBloke for helpful advice
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output
Example output:
```
<|im_start|>system
You are a helpful AI assistant<|im_end|>
<|im_start|>user
Which theory of everything is most likely correct?<|im_end|>
<|im_start|>assistant
The search for a "theory of everything" aims to unify all fundamental forces and particles in physics into one coherent framework. Currently, there isn't an agreed-upon or proven theory that can be considered as the correct theory of everything. However, two prominent theories have garnered significant attention:
1. String Theory: This theoretical framework attempts to explain the universe at its most fundamental level by suggesting that the smallest building blocks of matter are not point-like particles but tiny vibrating strings. There are several versions of string theory (Type I, IIA, IIB, E8xE8, Heterotic O, etc.) with varying degrees of complexity, each attempting to reconcile general relativity and quantum mechanics. Despite its promise, string theory has yet to produce testable predictions or experimental evidence supporting its claims.
2. Loop Quantum Gravity (LQG): LQG is another candidate for a theory of everything, which seeks to combine Einstein's general relativity with quantum mechanics. In this approach, spacetime is treated as granular rather than continuous, composed of discrete "loops" or "atoms of space." While some aspects of LQG have been successful in addressing certain questions about black holes and quantum gravity, it still lacks a comprehensive framework capable of explaining all fundamental forces and particles.
Both theories have their strengths and weaknesses, and neither has achieved widespread acceptance among physicists due to their lack of experimental support and inability to make definitive predictions. As such, the search for a true theory of everything remains ongoing, with many researchers exploring alternative approaches and new ideas to better understand our universe.
```
[If you would like to financially support my efforts](https://ko-fi.com/erichartford)
[I also have some swag you can buy](https://fa7113.myshopify.com/)
<!-- original-model-card end -->
|
Skylarrlol/Hia
|
Skylarrlol
| 2023-12-07T23:14:48Z | 0 | 0 | null |
[
"music",
"ko",
"en",
"fr",
"it",
"sw",
"license:openrail",
"region:us"
] | null | 2023-12-07T22:03:40Z |
---
license: openrail
language:
- ko
- en
- fr
- it
- sw
tags:
- music
---
|
Weyaxi/MetaMath-una-cybertron-v2-bf16-Ties
|
Weyaxi
| 2023-12-07T23:08:14Z | 1,547 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-06T10:02:33Z |
---
license: apache-2.0
---
Merge of [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) and [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) using a TIES merge, with the weights and densities listed below (a loading sketch follows).
### *Weights*
- [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B): 0.5
- [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16): 0.3
### *Density*
- [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B): 0.5
- [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16): 0.5
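### *Usage*
A minimal loading sketch (assumed to follow the standard Transformers API for Mistral-based models; the prompt and generation settings are placeholders):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Weyaxi/MetaMath-una-cybertron-v2-bf16-Ties")
model = AutoModelForCausalLM.from_pretrained(
    "Weyaxi/MetaMath-una-cybertron-v2-bf16-Ties", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("What is 12 * 17? Show your reasoning.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```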
|
nathan-marquez/distilbert-base-cased-DAPT-religious_political
|
nathan-marquez
| 2023-12-07T22:51:57Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-12-05T00:22:12Z |
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-cased-DAPT-religious_political
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-DAPT-religious_political
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 145 | 2.4549 |
| No log | 2.0 | 290 | 2.4827 |
| No log | 3.0 | 435 | 2.3605 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
pokjay/q-Taxi-v3
|
pokjay
| 2023-12-07T22:43:25Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-07T21:52:59Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # older course notebooks use `import gym`

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="pokjay/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
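Continuing from the snippet above, a minimal greedy rollout sketch; it assumes the pickled dict also stores the Q-table under the key `qtable` (as in the course notebooks) and uses the Gymnasium step API.
```python
import numpy as np

qtable = model["qtable"]          # assumed key; check the loaded dict if it differs

state, info = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(qtable[state]))                     # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward}")
```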
|
vsamuel/ten_epoch
|
vsamuel
| 2023-12-07T22:13:52Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-07T14:59:47Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: ten_epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ten_epoch
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5955
- Rouge1: 0.3082
- Rouge2: 0.1683
- Rougel: 0.2764
- Rougelsum: 0.2753
- Gen Len: 17.35
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 107 | 3.2228 | 0.1126 | 0.0184 | 0.0993 | 0.0988 | 18.45 |
| No log | 2.0 | 214 | 2.9077 | 0.1374 | 0.0353 | 0.1209 | 0.1209 | 17.98 |
| No log | 3.0 | 321 | 2.7883 | 0.2181 | 0.0901 | 0.1951 | 0.1944 | 17.38 |
| No log | 4.0 | 428 | 2.7240 | 0.2353 | 0.1079 | 0.2106 | 0.2097 | 17.25 |
| 3.0648 | 5.0 | 535 | 2.6765 | 0.2634 | 0.1289 | 0.2375 | 0.2373 | 17.41 |
| 3.0648 | 6.0 | 642 | 2.6428 | 0.2724 | 0.1343 | 0.2482 | 0.2475 | 17.45 |
| 3.0648 | 7.0 | 749 | 2.6210 | 0.2886 | 0.1473 | 0.2559 | 0.2552 | 17.65 |
| 3.0648 | 8.0 | 856 | 2.6058 | 0.3013 | 0.1627 | 0.2701 | 0.269 | 17.43 |
| 3.0648 | 9.0 | 963 | 2.5978 | 0.3053 | 0.1667 | 0.2737 | 0.2722 | 17.45 |
| 2.6022 | 10.0 | 1070 | 2.5955 | 0.3082 | 0.1683 | 0.2764 | 0.2753 | 17.35 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
vsamuel/seven_epoch
|
vsamuel
| 2023-12-07T22:09:07Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-07T22:05:17Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: seven_epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# seven_epoch
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6967
- Rouge1: 0.2471
- Rouge2: 0.1156
- Rougel: 0.221
- Rougelsum: 0.2214
- Gen Len: 17.57
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 107 | 3.2445 | 0.1098 | 0.0185 | 0.0968 | 0.0967 | 18.47 |
| No log | 2.0 | 214 | 2.9257 | 0.1353 | 0.0337 | 0.119 | 0.1191 | 17.94 |
| No log | 3.0 | 321 | 2.8182 | 0.2021 | 0.0797 | 0.1801 | 0.1795 | 17.45 |
| No log | 4.0 | 428 | 2.7577 | 0.2358 | 0.1081 | 0.2096 | 0.2089 | 17.53 |
| 3.0898 | 5.0 | 535 | 2.7202 | 0.2347 | 0.1059 | 0.211 | 0.2094 | 17.31 |
| 3.0898 | 6.0 | 642 | 2.7035 | 0.2429 | 0.1124 | 0.2191 | 0.2188 | 17.58 |
| 3.0898 | 7.0 | 749 | 2.6967 | 0.2471 | 0.1156 | 0.221 | 0.2214 | 17.57 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
vsamuel/five_epoch
|
vsamuel
| 2023-12-07T22:04:36Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-07T21:55:54Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: five_epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# five_epoch
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7970
- Rouge1: 0.2098
- Rouge2: 0.0823
- Rougel: 0.1904
- Rougelsum: 0.1905
- Gen Len: 17.47
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 107 | 3.2569 | 0.1107 | 0.018 | 0.0981 | 0.0978 | 18.5 |
| No log | 2.0 | 214 | 2.9546 | 0.1386 | 0.0334 | 0.1187 | 0.1186 | 18.09 |
| No log | 3.0 | 321 | 2.8548 | 0.1721 | 0.0587 | 0.1544 | 0.1544 | 17.76 |
| No log | 4.0 | 428 | 2.8110 | 0.204 | 0.0774 | 0.1862 | 0.1871 | 17.64 |
| 3.1308 | 5.0 | 535 | 2.7970 | 0.2098 | 0.0823 | 0.1904 | 0.1905 | 17.47 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Pranavsenthilvel/classification-4-kfold-V1
|
Pranavsenthilvel
| 2023-12-07T21:57:30Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-07T19:48:02Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: classification-4-kfold-V1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classification-4-kfold-V1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 272 | 0.0105 | 0.9986 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
vsamuel/two_epoch
|
vsamuel
| 2023-12-07T21:54:26Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-07T21:28:21Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: two_epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# two_epoch
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2015
- Rouge1: 0.107
- Rouge2: 0.0169
- Rougel: 0.0947
- Rougelsum: 0.0946
- Gen Len: 18.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 107 | 3.3575 | 0.1128 | 0.0191 | 0.1006 | 0.1009 | 18.56 |
| No log | 2.0 | 214 | 3.2015 | 0.107 | 0.0169 | 0.0947 | 0.0946 | 18.5 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hts98/bertopic_ver2.0
|
hts98
| 2023-12-07T21:26:44Z | 9 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-12-07T21:26:41Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# bertopic_ver2.0
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("hts98/bertopic_ver2.0")
topic_model.get_topic_info()
```
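Building on the snippet above, new documents can be assigned to the trained topics with `transform`; the example documents below are placeholders, and the embedding model used at training time should be available locally.
```python
docs = [
    "The rooftop pool and the front-desk staff were fantastic.",
    "Noisy room, thin walls, and the wifi kept dropping.",
]
topics, probs = topic_model.transform(docs)
print(topics)  # one topic id per document; -1 marks outliers
```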
## Topic overview
* Number of topics: 207
* Number of training documents: 529579
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | ok - ms - wifi - swimming - balcony | 150 | -1_ok_ms_wifi_swimming |
| 0 | hanoi - quarter - halong - sapa - lake | 217212 | 0_hanoi_quarter_halong_sapa |
| 1 | vietnamese - vietnam - viet - western - weeks | 47397 | 1_vietnamese_vietnam_viet_western |
| 2 | hoi - homestay - ancient - bikes - bicycles | 46092 | 2_hoi_homestay_ancient_bikes |
| 3 | told - smell - said - dirty - com | 16583 | 3_told_smell_said_dirty |
| 4 | hostel - dorm - dorms - hostels - lockers | 12960 | 4_hostel_dorm_dorms_hostels |
| 5 | saigon - square - ben - district - thanh | 12475 | 5_saigon_square_ben_district |
| 6 | bikes - bike - bicycles - rent - scooter | 8246 | 6_bikes_bike_bicycles_rent |
| 7 | minh - ho - chi - district - city | 6633 | 7_minh_ho_chi_district |
| 8 | hue - citadel - perfume - imperial - tombs | 6057 | 8_hue_citadel_perfume_imperial |
| 9 | noise - loud - noisy - music - hear | 4572 | 9_noise_loud_noisy_music |
| 10 | villa - villas - bedroom - private - butler | 4060 | 10_villa_villas_bedroom_private |
| 11 | sea - ocean - beach - seaview - khe | 4032 | 11_sea_ocean_beach_seaview |
| 12 | nha - trang - nhi - novotel - nhatrang | 3936 | 12_nha_trang_nhi_novotel |
| 13 | hcmc - district - ben - rooftop - attractions | 3890 | 13_hcmc_district_ben_rooftop |
| 14 | massage - spa - therapist - massages - body | 3680 | 14_massage_spa_therapist_massages |
| 15 | swimming - pool - roof - rooftop - swim | 3645 | 15_swimming_pool_roof_rooftop |
| 16 | resort - resorts - pools - grounds - relaxing | 3495 | 16_resort_resorts_pools_grounds |
| 17 | quarter - quarters - old - heart - middle | 3454 | 17_quarter_quarters_old_heart |
| 18 | nang - da - khe - han - fivitel | 3311 | 18_nang_da_khe_han |
| 19 | staffs - polite - helped - desk - kind | 2991 | 19_staffs_polite_helped_desk |
| 20 | hostel - hostels - backpackers - dorm - tomodachi | 2813 | 20_hostel_hostels_backpackers_dorm |
| 21 | mekong - delta - floating - lodge - tho | 2779 | 21_mekong_delta_floating_lodge |
| 22 | hostel - hostels - dalat - dorm - dorms | 2694 | 22_hostel_hostels_dalat_dorm |
| 23 | center - shopping - centre - city - attractions | 2650 | 23_center_shopping_centre_city |
| 24 | longer - nights - stayed - enjoyed - total | 2465 | 24_longer_nights_stayed_enjoyed |
| 25 | halong - cruise - bay - cruises - overnight | 2380 | 25_halong_cruise_bay_cruises |
| 26 | danang - sanouva - han - dragon - khe | 2346 | 26_danang_sanouva_han_dragon |
| 27 | quoc - phu - island - regent - bauhinia | 2314 | 27_quoc_phu_island_regent |
| 28 | asia - chinese - se - hong - asian | 2298 | 28_asia_chinese_se_hong |
| 29 | ben - thanh - halal - muslim - market | 2268 | 29_ben_thanh_halal_muslim |
| 30 | lake - west - quarter - cathedral - sword | 2114 | 30_lake_west_quarter_cathedral |
| 31 | nights - 2015 - 2018 - stayed - november | 2068 | 31_nights_2015_2018_stayed |
| 32 | kiem - hoan - lake - hoam - joseph | 2036 | 32_kiem_hoan_lake_hoam |
| 33 | swimming - pools - pool - swim - kids | 2017 | 33_swimming_pools_pool_swim |
| 34 | siesta - la - trendy - bean - premium | 1958 | 34_siesta_la_trendy_bean |
| 35 | river - riverview - overlooking - views - boat | 1841 | 35_river_riverview_overlooking_views |
| 36 | star - stars - rating - 5star - rated | 1829 | 36_star_stars_rating_5star |
| 37 | phong - nha - farmstay - cave - caves | 1760 | 37_phong_nha_farmstay_cave |
| 38 | lounge - club - executive - platinum - accor | 1671 | 38_lounge_club_executive_platinum |
| 39 | ninh - binh - coc - mua - vancouver | 1657 | 39_ninh_binh_coc_mua |
| 40 | birthday - cake - honeymoon - wedding - anniversary | 1581 | 40_birthday_cake_honeymoon_wedding |
| 41 | market - convenient - shopping - markets - center | 1562 | 41_market_convenient_shopping_markets |
| 42 | mui - ne - hills - muine - thiet | 1392 | 42_mui_ne_hills_muine |
| 43 | sapa - train - halong - bay - trips | 1389 | 43_sapa_train_halong_bay |
| 44 | hcm - district - ben - hcmh - thanh | 1376 | 44_hcm_district_ben_hcmh |
| 45 | district - d1 - attractions - rooftop - heart | 1334 | 45_district_d1_attractions_rooftop |
| 46 | lodge - bamboo - tien - cottages - national | 1314 | 46_lodge_bamboo_tien_cottages |
| 47 | villa - villas - hoi - cherry - tnt | 1232 | 47_villa_villas_hoi_cherry |
| 48 | speak - english - speaks - spoke - language | 1151 | 48_speak_english_speaks_spoke |
| 49 | ha - bay - long - cruise - boat | 1143 | 49_ha_bay_long_cruise |
| 50 | hue - citadel - serene - imperial - perfume | 1130 | 50_hue_citadel_serene_imperial |
| 51 | cat - ba - catba - island - woodstock | 1121 | 51_cat_ba_catba_island |
| 52 | japanese - yen - japan - bridge - tokyo | 1099 | 52_japanese_yen_japan_bridge |
| 53 | siesta - la - trendy - premium - bean | 1038 | 53_siesta_la_trendy_premium |
| 54 | airport - transit - taxi - flight - uber | 1020 | 54_airport_transit_taxi_flight |
| 55 | tau - vung - pullman - malibu - mercure | 1010 | 55_tau_vung_pullman_malibu |
| 56 | loop - giang - bong - guides - ha | 979 | 56_loop_giang_bong_guides |
| 57 | dalat - edensee - valley - golf - center | 927 | 57_dalat_edensee_valley_golf |
| 58 | anantara - ne - mui - noel - anantaras | 857 | 58_anantara_ne_mui_noel |
| 59 | naman - retreat - ambassador - hay - babylon | 841 | 59_naman_retreat_ambassador_hay |
| 60 | beach - 40km - sea - chill - sand | 769 | 60_beach_40km_sea_chill |
| 61 | vinpearl - vin - safari - pearl - discovery | 749 | 61_vinpearl_vin_safari_pearl |
| 62 | vegan - vegetarian - vegetarians - seafood - chicken | 715 | 62_vegan_vegetarian_vegetarians_seafood |
| 63 | salinda - quoc - phu - jeevan - indian | 702 | 63_salinda_quoc_phu_jeevan |
| 64 | dao - senses - condor - poulo - van | 695 | 64_dao_senses_condor_poulo |
| 65 | french - colonial - france - paris - style | 689 | 65_french_colonial_france_paris |
| 66 | thank - ms - appreciation - smile - express | 688 | 66_thank_ms_appreciation_smile |
| 67 | dragon - han - bridge - rising - river | 686 | 67_dragon_han_bridge_rising |
| 68 | james - basic - helpfull - slave - supporting | 683 | 68_james_basic_helpfull_slave |
| 69 | noi - ha - graceful - hoan - kiem | 657 | 69_noi_ha_graceful_hoan |
| 70 | host - hosts - hostal - hostess - family | 629 | 70_host_hosts_hostal_hostess |
| 71 | metropole - sofitel - wing - legend - history | 623 | 71_metropole_sofitel_wing_legend |
| 72 | coc - tam - mua - caves - ecolodge | 617 | 72_coc_tam_mua_caves |
| 73 | opera - wing - house - hilton - history | 599 | 73_opera_wing_house_hilton |
| 74 | citadel - tombs - perfume - imperial - pagoda | 581 | 74_citadel_tombs_perfume_imperial |
| 75 | residencia - la - residence - dora - casa | 558 | 75_residencia_la_residence_dora |
| 76 | pros - cons - pro - poor - properly | 540 | 76_pros_cons_pro_poor |
| 77 | fusion - fusionista - treatments - quoc - phu | 526 | 77_fusion_fusionista_treatments_quoc |
| 78 | mia - sandals - cliff - baia - cindy | 502 | 78_mia_sandals_cliff_baia |
| 79 | kiman - kim - mama - kims - ty | 490 | 79_kiman_kim_mama_kims |
| 80 | ming - ho - min - chi - mihn | 487 | 80_ming_ho_min_chi |
| 81 | oriental - suites - mary - central - brian | 483 | 81_oriental_suites_mary_central |
| 82 | rex - follies - governor - history - journalists | 475 | 82_rex_follies_governor_history |
| 83 | veranda - la - quoc - phu - colonial | 467 | 83_veranda_la_quoc_phu |
| 84 | island - bungalows - bungalow - beaches - snorkeling | 458 | 84_island_bungalows_bungalow_beaches |
| 85 | mango - eco - plantation - bay - bungalow | 450 | 85_mango_eco_plantation_bay |
| 86 | lat - da - dreams - axel - amore | 442 | 86_lat_da_dreams_axel |
| 87 | ancient - town - tai - shuttle - bicycle | 442 | 87_ancient_town_tai_shuttle |
| 88 | nha - trang - russian - anam - regalia | 436 | 88_nha_trang_russian_anam |
| 89 | chau - mai - valley - ecolodge - villages | 435 | 89_chau_mai_valley_ecolodge |
| 90 | hyatt - park - saigon - opera - square | 424 | 90_hyatt_park_saigon_opera |
| 91 | serene - shining - palace - hue - premier | 421 | 91_serene_shining_palace_hue |
| 92 | dong - geminai - khoi - myst - caves | 410 | 92_dong_geminai_khoi_myst |
| 93 | hcmc - district - anpha - chahn - vietnam | 406 | 93_hcmc_district_anpha_chahn |
| 94 | jw - marriott - executive - lounge - grill | 403 | 94_jw_marriott_executive_lounge |
| 95 | silverland - jolie - sakyo - yen - afternoon | 403 | 95_silverland_jolie_sakyo_yen |
| 96 | korean - korea - koreans - ri - chung | 399 | 96_korean_korea_koreans_ri |
| 97 | notre - dame - war - museum - remnants | 396 | 97_notre_dame_war_museum |
| 98 | golden - moon - legend - alisa - jay | 395 | 98_golden_moon_legend_alisa |
| 99 | selva - la - jackie - kevin - jane | 395 | 99_selva_la_jackie_kevin |
| 100 | palm - garden - gardens - palmy - resort | 395 | 100_palm_garden_gardens_palmy |
| 101 | nang - da - chicland - dragon - buddha | 394 | 101_nang_da_chicland_dragon |
| 102 | saigon - hostel - hideout - dorm - backpackers | 389 | 102_saigon_hostel_hideout_dorm |
| 103 | golden - sun - suites - eric - jessie | 386 | 103_golden_sun_suites_eric |
| 104 | cambodia - border - cambodian - penh - phnom | 384 | 104_cambodia_border_cambodian_penh |
| 105 | metropole - sofitel - wing - legend - history | 382 | 105_metropole_sofitel_wing_legend |
| 106 | ngu - lao - pham - vien - bui | 372 | 106_ngu_lao_pham_vien |
| 107 | town - old - obsequious - storeplace - 75us | 363 | 107_town_old_obsequious_storeplace |
| 108 | moon - alisa - bob - madam - lunar | 362 | 108_moon_alisa_bob_madam |
| 109 | nhon - quy - avani - flc - seagull | 354 | 109_nhon_quy_avani_flc |
| 110 | riverside - royal - na - tham - river | 352 | 110_riverside_royal_na_tham |
| 111 | oriental - suites - mary - central - brian | 344 | 111_oriental_suites_mary_central |
| 112 | jade - hue - scene - citadel - juice | 334 | 112_jade_hue_scene_citadel |
| 113 | alagon - maison - ma - antique - dantique | 333 | 113_alagon_maison_ma_antique |
| 114 | minh - ho - chi - hostel - dablend | 329 | 114_minh_ho_chi_hostel |
| 115 | christina - christinas - trevor - mothership - onetrip | 327 | 115_christina_christinas_trevor_mothership |
| 116 | lotus - royal - chin - veronica - danang | 327 | 116_lotus_royal_chin_veronica |
| 117 | wifi - signal - fi - wi - internet | 324 | 117_wifi_signal_fi_wi |
| 118 | russian - russians - russia - volga - moscow | 319 | 118_russian_russians_russia_volga |
| 119 | liberty - citypoint - riverside - central - saigon | 312 | 119_liberty_citypoint_riverside_central |
| 120 | hcmc - saigon - square - zohn - district | 309 | 120_hcmc_saigon_square_zohn |
| 121 | marriott - jw - marriotts - platinum - elite | 302 | 121_marriott_jw_marriotts_platinum |
| 122 | essence - palace - suzy - christine - phoebe | 301 | 122_essence_palace_suzy_christine |
| 123 | gym - fitness - equipment - weights - machines | 298 | 123_gym_fitness_equipment_weights |
| 124 | hai - phong - somerset - avani - td | 297 | 124_hai_phong_somerset_avani |
| 125 | window - windows - windowless - claustrophobic - corridor | 297 | 125_window_windows_windowless_claustrophobic |
| 126 | lotte - dennis - dat - 63 - department | 296 | 126_lotte_dennis_dat_63 |
| 127 | orchid - orchids - tammy - garden - hue | 289 | 127_orchid_orchids_tammy_garden |
| 128 | pullman - accor - saigon - executive - cow | 288 | 128_pullman_accor_saigon_executive |
| 129 | golden - sun - suites - eric - palace | 282 | 129_golden_sun_suites_eric |
| 130 | moon - luna - moons - tri - tailoring | 281 | 130_moon_luna_moons_tri |
| 131 | bartender - rooftop - cocktails - bar - cocktail | 280 | 131_bartender_rooftop_cocktails_bar |
| 132 | maia - fusion - treatments - fusionista - spa | 267 | 132_maia_fusion_treatments_fusionista |
| 133 | peppercorn - pepper - farm - brewery - kayaks | 267 | 133_peppercorn_pepper_farm_brewery |
| 134 | amiana - pools - mud - nha - trang | 265 | 134_amiana_pools_mud_nha |
| 135 | dalat - canyoning - hostel - luu - dinner | 255 | 135_dalat_canyoning_hostel_luu |
| 136 | english - speak - speaks - spoke - speaking | 254 | 136_english_speak_speaks_spoke |
| 137 | rosaleen - rosaka - nhi - rosa - rosabella | 254 | 137_rosaleen_rosaka_nhi_rosa |
| 138 | boutique - pama - della - charming - definition | 254 | 138_boutique_pama_della_charming |
| 139 | pullman - accor - cow - quoc - mad | 254 | 139_pullman_accor_cow_quoc |
| 140 | hoian - rossie - ancient - hoians - ahoy | 252 | 140_hoian_rossie_ancient_hoians |
| 141 | nikko - saigon - japanese - seafood - club | 252 | 141_nikko_saigon_japanese_seafood |
| 142 | apricot - artwork - chandeliers - sketch - lake | 245 | 142_apricot_artwork_chandeliers_sketch |
| 143 | sofia - antonio - nang - ryan - danang | 244 | 143_sofia_antonio_nang_ryan |
| 144 | gia - pham - thy - nga - homestay | 244 | 144_gia_pham_thy_nga |
| 145 | song - lam - villa - saigon - speedboat | 242 | 145_song_lam_villa_saigon |
| 146 | indochine - palace - hue - bwp - christine | 242 | 146_indochine_palace_hue_bwp |
| 147 | anam - pools - eka - ranh - cam | 241 | 147_anam_pools_eka_ranh |
| 148 | rooftop - roof - bar - barrestaurant - sky | 239 | 148_rooftop_roof_bar_barrestaurant |
| 149 | hoang - trinh - tan - le - madam | 236 | 149_hoang_trinh_tan_le |
| 150 | golden - legend - gold - diamond - art | 236 | 150_golden_legend_gold_diamond |
| 151 | jacuzzi - sauna - jacuzzis - steam - afternoon | 235 | 151_jacuzzi_sauna_jacuzzis_steam |
| 152 | sheraton - sheratons - spg - briana - tran | 233 | 152_sheraton_sheratons_spg_briana |
| 153 | thai - bangkok - thailand - bao - und | 226 | 153_thai_bangkok_thailand_bao |
| 154 | muca - osaka - gardens - vegetable - riverside | 226 | 154_muca_osaka_gardens_vegetable |
| 155 | sheraton - nha - trang - altitude - briana | 225 | 155_sheraton_nha_trang_altitude |
| 156 | vien - bui - bu - street - alley | 224 | 156_vien_bui_bu_street |
| 157 | ruby - pearl - sapphire - amber - thien | 223 | 157_ruby_pearl_sapphire_amber |
| 158 | australia - australian - aussie - melbourne - australians | 222 | 158_australia_australian_aussie_melbourne |
| 159 | sherwood - residence - suites - apartment - dryer | 221 | 159_sherwood_residence_suites_apartment |
| 160 | selva - la - sinfonia - vela - del | 219 | 160_selva_la_sinfonia_vela |
| 161 | hyatt - park - globalist - dreyer - agathe | 219 | 161_hyatt_park_globalist_dreyer |
| 162 | hai - nam - seasons - butler - ghm | 217 | 162_hai_nam_seasons_butler |
| 163 | liberty - citypoint - central - riverside - bistro | 216 | 163_liberty_citypoint_central_riverside |
| 164 | 3b - homestay - bc - hanowood - luan | 213 | 164_3b_homestay_bc_hanowood |
| 165 | vinh - hung - emerald - riverside - library | 213 | 165_vinh_hung_emerald_riverside |
| 166 | buffet - restau - extensive - buffets - ladyship | 213 | 166_buffet_restau_extensive_buffets |
| 167 | jm - marvel - cuong - anna - lucy | 210 | 167_jm_marvel_cuong_anna |
| 168 | write - review - reviews - advisor - compelled | 207 | 168_write_review_reviews_advisor |
| 169 | meridien - le - spg - saigon - andy | 206 | 169_meridien_le_spg_saigon |
| 170 | heaven - green - phuoc - tuong - thuy | 205 | 170_heaven_green_phuoc_tuong |
| 171 | gallery - premier - majestic - brian - monica | 205 | 171_gallery_premier_majestic_brian |
| 172 | hano - hanio - hani - hannoi - recommand | 203 | 172_hano_hanio_hani_hannoi |
| 173 | nikko - japanese - club - okura - beige | 201 | 173_nikko_japanese_club_okura |
| 174 | 1010 - 910 - tak - humbleness - 810 | 201 | 174_1010_910_tak_humbleness |
| 175 | imperial - palace - tombs - perfume - falicities | 201 | 175_imperial_palace_tombs_perfume |
| 176 | pearl - junie - amanda - betty - dominic | 194 | 176_pearl_junie_amanda_betty |
| 177 | siesta - la - lavita - trendy - bean | 192 | 177_siesta_la_lavita_trendy |
| 178 | leo - tigers - tiger - tiny - canyoning | 191 | 178_leo_tigers_tiger_tiny |
| 179 | omelette - omelettes - omelet - omelets - vissai | 190 | 179_omelette_omelettes_omelet_omelets |
| 180 | lat - da - canyoning - hostel - dablend | 188 | 180_lat_da_canyoning_hostel |
| 181 | melia - meli - vi - mellia - bavi | 186 | 181_melia_meli_vi_mellia |
| 182 | muong - thanh - tinh - quang - mung | 182 | 182_muong_thanh_tinh_quang |
| 183 | grand - hmc - tower - wing - hotram | 181 | 183_grand_hmc_tower_wing |
| 184 | tho - kieu - xoai - quay - floating | 180 | 184_tho_kieu_xoai_quay |
| 185 | infinity - suresh - ocean - 47th - ying | 180 | 185_infinity_suresh_ocean_47th |
| 186 | lasenta - fields - rice - paddy - infinity | 180 | 186_lasenta_fields_rice_paddy |
| 187 | cherish - hue - larry - citadel - jay | 179 | 187_cherish_hue_larry_citadel |
| 188 | venus - tracey - sunny - uyen - tick | 178 | 188_venus_tracey_sunny_uyen |
| 189 | bonsella - prestige - jason - eddie - kiem | 176 | 189_bonsella_prestige_jason_eddie |
| 190 | pilgrimage - village - miwa - slope - pilgrim | 175 | 190_pilgrimage_village_miwa_slope |
| 191 | alba - wellness - onsen - springs - zipline | 174 | 191_alba_wellness_onsen_springs |
| 192 | sheraton - towers - tower - saigon - lounge | 172 | 192_sheraton_towers_tower_saigon |
| 193 | somerset - chancellor - apartment - apartments - court | 172 | 193_somerset_chancellor_apartment_apartments |
| 194 | essence - palace - tom - suzy - christine | 170 | 194_essence_palace_tom_suzy |
| 195 | royal - palace - prince - expierence - citybay | 170 | 195_royal_palace_prince_expierence |
| 196 | bungalow - bungalows - beach - beachfront - seaside | 168 | 196_bungalow_bungalows_beach_beachfront |
| 197 | sofitel - plaza - saigon - mezz - club | 166 | 197_sofitel_plaza_saigon_mezz |
| 198 | maison - vy - privileges - cargo - glory | 165 | 198_maison_vy_privileges_cargo |
| 199 | deluxe - superior - premium - amex - standard | 163 | 199_deluxe_superior_premium_amex |
| 200 | melia - vi - bavi - ba - mountain | 160 | 200_melia_vi_bavi_ba |
| 201 | emerald - waters - elegance - jasmine - trendy | 157 | 201_emerald_waters_elegance_jasmine |
| 202 | mosquitoes - mosquito - mosquitos - repellent - nets | 155 | 202_mosquitoes_mosquito_mosquitos_repellent |
| 203 | storia - ruby - la - isabella - jasmine | 153 | 203_storia_ruby_la_isabella |
| 204 | gallery - ogallery - premier - majestic - thomas | 152 | 204_gallery_ogallery_premier_majestic |
| 205 | jw - marriott - university - lamarck - quoc | 151 | 205_jw_marriott_university_lamarck |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.24.3
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.35.2
* Numba: 0.57.1
* Plotly: 5.16.1
* Python: 3.10.12
|
alif-munim/llama-2-7b-guanaco-cluster1
|
alif-munim
| 2023-12-07T21:19:46Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-12-07T21:18:35Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.3.dev0
|
chez2023/vit-base-patch16-224-in21k-finetuned-lora-scene_cs474
|
chez2023
| 2023-12-07T21:00:18Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:adapter:google/vit-base-patch16-224-in21k",
"region:us"
] | null | 2023-12-07T21:00:17Z |
---
library_name: peft
base_model: google/vit-base-patch16-224-in21k
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.0
|
fbellame/mistral-finetuned-samsum
|
fbellame
| 2023-12-07T20:55:28Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-10-14T16:09:07Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: gptq
- bits: 4
- tokenizer: None
- dataset: None
- group_size: 128
- damp_percent: 0.1
- desc_act: True
- sym: True
- true_sequential: True
- use_cuda_fp16: False
- model_seqlen: None
- block_name_to_quantize: None
- module_name_preceding_first_block: None
- batch_size: 1
- pad_token_id: None
- use_exllama: False
- max_input_length: None
- exllama_config: {'version': <ExllamaVersion.ONE: 1>}
- cache_block_outputs: True
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.0
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
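A minimal inference sketch for attaching this adapter to the GPTQ base model (assumes the GPTQ runtime dependencies such as `optimum`/`auto-gptq` are installed; the prompt is a placeholder and this is not the exact setup used in training):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "fbellame/mistral-finetuned-samsum")

prompt = "Summarize: Anna: Are we still meeting at 5? Ben: Yes, see you there."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```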
|
esmarquez17/hate-social-network-adversarial
|
esmarquez17
| 2023-12-07T20:53:17Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"base_model:esmarquez17/fine-tunning-roberta-bne-hate-offensive",
"base_model:finetune:esmarquez17/fine-tunning-roberta-bne-hate-offensive",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-30T02:22:24Z |
---
license: apache-2.0
base_model: esmarquez17/fine-tunning-roberta-bne-hate-offensive
tags:
- generated_from_keras_callback
model-index:
- name: hate-social-network-adversarial
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hate-social-network-adversarial
This model is a version of [esmarquez17/fine-tunning-roberta-bne-hate-offensive](https://huggingface.co/esmarquez17/fine-tunning-roberta-bne-hate-offensive) fine-tuned on an unknown dataset and
evaluated on a SemEval-2019 dataset augmented with adversarially generated examples:
## Model description
- Base model: RoBERTa-BNE fine-tuned on a corpus of theater scripts
- Model trained on the proposed adversarial dataset
## Training and evaluation data
- Trained on the base SemEval-Spanish corpus
- Validated on SemEval-Spanish
- Tested on the HATERNET and HATECHECK corpora
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 9385, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}}
- training_precision: float32
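Equivalently, this learning-rate schedule can be reconstructed in Keras as follows (a sketch assuming standard TensorFlow usage):
```python
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=1e-5,
    decay_steps=9385,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```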
### Training results
| Metric    | Training | Validation |
|:----------|---------:|-----------:|
| Accuracy  | 0.9702   | 0.8520     |
| Precision | 0.9622   | 0.8558     |
| F1-score  | 0.9615   | 0.8279     |
| Recall    | 0.9609   | 0.8018     |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Samuael/asr-alffamharic-phoneme-based
|
Samuael
| 2023-12-07T20:50:29Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:Samuael/asr-alffamharic-phoneme-based",
"base_model:finetune:Samuael/asr-alffamharic-phoneme-based",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-20T15:24:09Z |
---
license: apache-2.0
base_model: Samuael/asr-alffamharic-phoneme-based
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: asr-alffamharic-phoneme-based
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asr-alffamharic-phoneme-based
This model is a fine-tuned version of [Samuael/asr-alffamharic-phoneme-based](https://huggingface.co/Samuael/asr-alffamharic-phoneme-based) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4498
- Wer: 0.1084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2273 | 0.59 | 200 | 1.0408 | 0.3569 |
| 0.7702 | 1.18 | 400 | 0.8009 | 0.2169 |
| 0.6769 | 1.76 | 600 | 0.6978 | 0.1869 |
| 0.5454 | 2.35 | 800 | 0.6163 | 0.1696 |
| 0.5779 | 2.94 | 1000 | 0.5616 | 0.1554 |
| 0.4996 | 3.53 | 1200 | 0.5413 | 0.1437 |
| 0.5648 | 4.12 | 1400 | 0.5111 | 0.1439 |
| 0.4741 | 4.71 | 1600 | 0.5178 | 0.1371 |
| 0.499 | 5.29 | 1800 | 0.4943 | 0.1324 |
| 0.4247 | 5.88 | 2000 | 0.4884 | 0.1279 |
| 0.4008 | 6.47 | 2200 | 0.4667 | 0.1254 |
| 0.2744 | 7.06 | 2400 | 0.4626 | 0.1288 |
| 0.3495 | 7.65 | 2600 | 0.4794 | 0.1229 |
| 0.4016 | 8.24 | 2800 | 0.4548 | 0.1228 |
| 0.3833 | 8.82 | 3000 | 0.4660 | 0.1209 |
| 0.3684 | 9.41 | 3200 | 0.4463 | 0.1235 |
| 0.4149 | 10.0 | 3400 | 0.4697 | 0.1171 |
| 0.3917 | 10.59 | 3600 | 0.4570 | 0.1188 |
| 0.2957 | 11.18 | 3800 | 0.4431 | 0.1171 |
| 0.3054 | 11.76 | 4000 | 0.4530 | 0.1163 |
| 0.2755 | 12.35 | 4200 | 0.4690 | 0.1138 |
| 0.3091 | 12.94 | 4400 | 0.4551 | 0.1157 |
| 0.2617 | 13.53 | 4600 | 0.4557 | 0.1147 |
| 0.2725 | 14.12 | 4800 | 0.4670 | 0.1088 |
| 0.2795 | 14.71 | 5000 | 0.4486 | 0.1118 |
| 0.3493 | 15.29 | 5200 | 0.4471 | 0.1109 |
| 0.2949 | 15.88 | 5400 | 0.4469 | 0.1090 |
| 0.2802 | 16.47 | 5600 | 0.4516 | 0.1100 |
| 0.2655 | 17.06 | 5800 | 0.4418 | 0.1105 |
| 0.3211 | 17.65 | 6000 | 0.4351 | 0.1095 |
| 0.2245 | 18.24 | 6200 | 0.4436 | 0.1093 |
| 0.2244 | 18.82 | 6400 | 0.4493 | 0.1091 |
| 0.2549 | 19.41 | 6600 | 0.4502 | 0.1082 |
| 0.3528 | 20.0 | 6800 | 0.4498 | 0.1084 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Reboot87/xlm-roberta-base-finetuned-panx-de
|
Reboot87
| 2023-12-07T20:41:29Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-07T20:40:41Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8649324094058843
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1395
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2581 | 1.0 | 525 | 0.1630 | 0.8198 |
| 0.1281 | 2.0 | 1050 | 0.1379 | 0.8496 |
| 0.0781 | 3.0 | 1575 | 0.1395 | 0.8649 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
kingabzpro/zephyr-7b-beta-Agent-Instruct
|
kingabzpro
| 2023-12-07T20:28:39Z | 8 | 1 |
peft
|
[
"peft",
"safetensors",
"pretrained",
"lora",
"qlora",
"4bit",
"text-generation",
"en",
"dataset:THUDM/AgentInstruct",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-07T12:10:48Z |
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
- lora
- qlora
- 4bit
datasets:
- THUDM/AgentInstruct
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
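The same quantization settings can be reproduced at load time; a sketch (not the exact training script) using standard `transformers` and `peft` APIs:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "kingabzpro/zephyr-7b-beta-Agent-Instruct")
```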
### Framework versions
- PEFT 0.7.0
|
VyIv/ciurlionis-painting-style
|
VyIv
| 2023-12-07T20:20:06Z | 0 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-07T20:16:12Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Ciurlionis-painting-style Dreambooth model trained by VyIv with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:

|
oykuscgl/qlora-on-llama2-7b-sharded
|
oykuscgl
| 2023-12-07T20:19:38Z | 1 | 0 |
peft
|
[
"peft",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-08-10T13:21:18Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
therem/gpt_imdb_hinge_beta1
|
therem
| 2023-12-07T20:16:35Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:lvwerra/gpt2-imdb",
"base_model:adapter:lvwerra/gpt2-imdb",
"region:us"
] | null | 2023-12-07T20:16:28Z |
---
library_name: peft
base_model: lvwerra/gpt2-imdb
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.0
|
Jennny/correctness_distill
|
Jennny
| 2023-12-07T20:15:08Z | 4 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"region:us"
] | null | 2023-12-07T20:14:27Z |
---
library_name: peft
base_model: distilbert-base-uncased
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
Eval Accuracy: 0.7334
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Pranavsenthilvel/classification-3-kfold-3-hopefull
|
Pranavsenthilvel
| 2023-12-07T20:03:29Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-07T18:03:59Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: classification-3-kfold-3-hopefull
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classification-3-kfold-3-hopefull
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 28 | 1.0723 | 0.9910 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Weni/ZeroShot-john-e3-3.0.3-Mistral-7b-Multilanguage-3.0.3
|
Weni
| 2023-12-07T20:01:38Z | 2 | 0 |
peft
|
[
"peft",
"pytorch",
"safetensors",
"mistral",
"region:us"
] | null | 2023-12-07T15:09:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
jlbaker361/rl_subtraction_decimal
|
jlbaker361
| 2023-12-07T19:59:28Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-07T19:59:26Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
oostapeno/rsgd3_full_1B_repl_coarsegrained_poly_router_dir_rougeL
|
oostapeno
| 2023-12-07T19:55:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-12-04T15:37:17Z |
Number of experts present in the library: 77
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| quoref_Find_Answer_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quartz_read_passage_below_choose_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| adversarial_qa_dbidaf_generate_question_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| dream_baseline_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| duorc_ParaphraseRC_answer_question_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| duorc_ParaphraseRC_answer_question_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| cos_e_v1_11_question_description_option_text_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
| squad_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| adversarial_qa_dbidaf_generate_question_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| ai2_arc_ARC_Challenge_1_0_0_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| web_questions_whats_the_answer_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| duorc_ParaphraseRC_title_generation_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| wiqa_effect_with_string_answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| duorc_ParaphraseRC_answer_question_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| web_questions_whats_the_answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| web_questions_whats_the_answer_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| wiqa_effect_with_string_answer_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| yelp_polarity_reviews_0_2_0_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| quoref_Find_Answer_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| quoref_Find_Answer_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| cos_e_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| dream_baseline_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| quartz_read_passage_below_choose_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| dream_baseline_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| dream_baseline_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| ai2_arc_ARC_Challenge_1_0_0_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| adversarial_qa_dbidaf_generate_question_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| duorc_ParaphraseRC_title_generation_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| dbpedia_14_given_a_choice_of_categories__v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| web_questions_whats_the_answer_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| wiqa_effect_with_string_answer_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| yelp_polarity_reviews_0_2_0_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| cos_e_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| quartz_read_passage_below_choose_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| wiqa_effect_with_string_answer_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| duorc_SelfRC_answer_question_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| duorc_ParaphraseRC_title_generation_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| squad_v1_1_3_0_0_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| duorc_SelfRC_answer_question_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| duorc_ParaphraseRC_title_generation_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| quartz_read_passage_below_choose_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| duorc_SelfRC_answer_question_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| quoref_Find_Answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_answer_question_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| adversarial_qa_dbidaf_generate_question_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| yelp_polarity_reviews_0_2_0_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
Last updated on: 2023-12-07 19:54:56+00:00
|
TURKCELL/roberta-base-turkish-uncased
|
TURKCELL
| 2023-12-07T19:45:07Z | 71 | 6 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-12-07T18:32:50Z |
---
license: mit
language:
- tr
---
🇹🇷 RoBERTaTurkish
## Model description
This is a Turkish RoBERTa base model pretrained on Turkish Wikipedia, Turkish OSCAR, and some news websites.
The final training corpus has a size of 38 GB and contains 329,720,508 sentences.
At Turkcell, we trained the model on an Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz with 256 GB RAM and 2 x GV100GL [Tesla V100 PCIe 32GB] GPUs for 2.5M steps.
# Usage
Load transformers library with:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("TURKCELL/roberta-base-turkish-uncased")
model = AutoModelForMaskedLM.from_pretrained("TURKCELL/roberta-base-turkish-uncased")
```
# Fill Mask Usage
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="TURKCELL/roberta-base-turkish-uncased",
tokenizer="TURKCELL/roberta-base-turkish-uncased"
)
fill_mask("iki ÃŒlke arasında <mask> baÅladı")
[{'sequence': 'iki ÃŒlke arasında savaÅ baÅladı',
'score': 0.3013845384120941,
'token': 1359,
'token_str': ' savaÅ'},
{'sequence': 'iki ÃŒlke arasında mÃŒzakereler baÅladı',
'score': 0.1058429479598999,
'token': 30439,
'token_str': ' mÃŒzakereler'},
{'sequence': 'iki ÃŒlke arasında görÃŒÅmeler baÅladı',
'score': 0.07718811184167862,
'token': 4916,
'token_str': ' görÃŒÅmeler'},
{'sequence': 'iki ÃŒlke arasında kriz baÅladı',
'score': 0.07174749672412872,
'token': 3908,
'token_str': ' kriz'},
{'sequence': 'iki ÃŒlke arasında çatıÅmalar baÅladı',
'score': 0.05678590387105942,
'token': 19346,
'token_str': ' çatıÅmalar'}]
```
|
segestic/Tinystories-gpt-0.1-3m
|
segestic
| 2023-12-07T19:36:30Z | 128 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:roneneldan/TinyStories",
"arxiv:2305.07759",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-07T19:07:35Z |
---
datasets:
- roneneldan/TinyStories
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
## We used the Hugging Face transformers library to recreate the TinyStories models on a consumer GPU, using the GPT-2 architecture instead of the GPT-Neo architecture originally used in the paper (https://arxiv.org/abs/2305.07759). The output model is 15 MB and has 3 million parameters.
# ------ EXAMPLE USAGE 1 ---
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("segestic/Tinystories-gpt-0.1-3m")
model = AutoModelForCausalLM.from_pretrained("segestic/Tinystories-gpt-0.1-3m")

prompt = "Once upon a time there was"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generate completion
output = model.generate(input_ids, max_length=1000, num_beams=1)

# Decode the completion
output_text = tokenizer.decode(output[0], skip_special_tokens=True)

# Print the generated text
print(output_text)
```
# ------ EXAMPLE USAGE 2 ------
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="segestic/Tinystories-gpt-0.1-3m")

prompt = "where is the little girl"

# Generate completion
output = pipe(prompt, max_length=1000, num_beams=1)

# Decode the completion
generated_text = output[0]['generated_text']

# Print the generated text
print(generated_text)
```
|
LoneStriker/NexusRaven-V2-13B-6.0bpw-h6-exl2
|
LoneStriker
| 2023-12-07T19:36:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2308.12950",
"base_model:codellama/CodeLlama-13b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-13b-Instruct-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-07T19:30:04Z |
---
license: llama2
base_model: codellama/CodeLlama-13b-Instruct-hf
model-index:
- name: NexusRaven-13B
results: []
---
# NexusRaven-13B: Surpassing GPT-4 for Zero-shot Function Calling
<p align="center">
<a href="https://huggingface.co/Nexusflow" target="_blank">Nexusflow HF</a> - <a href="https://discord.gg/HDSVmNAs3y" target="_blank">Nexusflow Discord</a> - <a href="http://nexusflow.ai/blogs/ravenv2" target="_blank">NexusRaven-V2 blog post</a> - <a href="https://colab.research.google.com/drive/19JYixRPPlanmW5q49WYi_tU8rhHeCEKW?usp=sharing" target="_blank">Prompting Notebook CoLab</a> - <a href="https://huggingface.co/spaces/Nexusflow/Nexus_Function_Calling_Leaderboard" target="_blank">Leaderboard</a> - <a href="https://huggingface.co/spaces/Nexusflow/NexusRaven-V2-Demo" target="_blank">Read-World Demo</a> - <a href="https://github.com/nexusflowai/NexusRaven-V2" target="_blank">NexusRaven-V2-13B Github</a>
</p>
<p align="center" width="100%">
<a><img src="NexusRaven.png" alt="NexusRaven" style="width: 40%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Introducing NexusRaven-V2-13B
NexusRaven is an open-source and commercially viable function calling LLM that surpasses the state-of-the-art in function calling capabilities.
**Versatile Function Calling Capability**: NexusRaven-V2 is capable of generating single function calls, nested calls, and parallel calls in many challenging cases.
**Fully Explainable**: NexusRaven-V2 is capable of generating very detailed explanations for the function calls it generates. This behavior can be turned off to save tokens during inference.
**Performance Highlights**: NexusRaven-V2 surpasses GPT-4 by 7% in function calling success rates in human-generated use cases involving nested and composite functions.
**Generalization to the Unseen**: NexusRaven-V2 has never been trained on the functions used in evaluation.
**Commercially Permissive**: The training of NexusRaven-V2 does not involve any data generated by proprietary LLMs such as GPT-4. You have full control of the model when deployed in commercial applications.
Please check out the following links!
- [Prompting Notebook CoLab](https://colab.research.google.com/drive/19JYixRPPlanmW5q49WYi_tU8rhHeCEKW?usp=sharing)
- [Evaluation Leaderboard](https://huggingface.co/spaces/Nexusflow/Nexus_Function_Calling_Leaderboard)
- [NexusRaven-V2 Real-World Demo](https://huggingface.co/spaces/Nexusflow/NexusRaven-V2-Demo)
## NexusRaven-V2 model usage
NexusRaven-V2 accepts a list of Python functions. These functions can do anything (including sending GET/POST requests to external APIs!). The only two requirements are the Python function signature and an appropriate docstring, which are used to generate the function call.
### NexusRaven-V2's Capabilities
NexusRaven-V2 is capable of generating deeply nested function calls, parallel function calls, and simple single calls. It can also justify the function calls it generates. If you would like to generate the call only, please set a stopping criterion of \"\<bot\_end\>\". Otherwise, please allow NexusRaven-V2 to run until its stop token (i.e. "\<\/s\>").
### Quick Start Prompting Guide
Please refer to our notebook, [How-To-Prompt.ipynb](How-To-Prompt.ipynb), for more advanced tutorials on using NexusRaven-V2!
1. We strongly recommend setting sampling to False when prompting NexusRaven-V2.
2. We strongly recommend a very low temperature (~0.001).
3. We strongly recommend following the prompting style below.
### Quickstart
You can run the model on a GPU using the following code.
```python
# Please `pip install transformers accelerate`
from transformers import pipeline
pipeline = pipeline(
"text-generation",
model="Nexusflow/NexusRaven-V2-13B",
torch_dtype="auto",
device_map="auto",
)
prompt_template = \
'''
Function:
def get_weather_data(coordinates):
"""
Fetches weather data from the Open-Meteo API for the given latitude and longitude.
Args:
coordinates (tuple): The latitude of the location.
Returns:
float: The current temperature in the coordinates you've asked for
"""
Function:
def get_coordinates_from_city(city_name):
"""
Fetches the latitude and longitude of a given city name using the Maps.co Geocoding API.
Args:
city_name (str): The name of the city.
Returns:
tuple: The latitude and longitude of the city.
"""
User Query: {query}<human_end>
'''
prompt = prompt_template.format(query="What's the weather like in Seattle right now?")
result = pipeline(prompt, max_new_tokens=2048, return_full_text=False, do_sample=False, temperature=0.001)[0]["generated_text"]
print (result)
```
This should generate the following:
```
Call: get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))<bot_end>
Thought: The function call `get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))` answers the question "What's the weather like in Seattle right now?" by following these steps:
1. `get_coordinates_from_city(city_name='Seattle')`: This function call fetches the latitude and longitude of the city "Seattle" using the Maps.co Geocoding API.
2. `get_weather_data(coordinates=...)`: This function call fetches the current weather data for the coordinates returned by the previous function call.
Therefore, the function call `get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))` answers the question "What's the weather like in Seattle right now?" by first fetching the coordinates of the city "Seattle" and then fetching the current weather data for those coordinates.
```
If you would like to prevent the generation of the explanation of the function call (for example, to save on inference tokens), please set a stopping criterion of \<bot_end\>.
Please follow this prompting template to maximize the performance of RavenV2.
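As an aside (not part of the original model card): a minimal sketch of enforcing that stop with the standard `transformers` `StoppingCriteria` API might look as follows — the `StopOnSubstring` class and the short prompt string are illustrative assumptions, not Nexusflow code:
```python
# Minimal sketch: stop greedy generation once the "<bot_end>" marker appears.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          StoppingCriteria, StoppingCriteriaList)

class StopOnSubstring(StoppingCriteria):
    """Return True (stop) once `stop_string` shows up in the decoded output."""
    def __init__(self, tokenizer, stop_string):
        self.tokenizer = tokenizer
        self.stop_string = stop_string

    def __call__(self, input_ids, scores, **kwargs):
        text = self.tokenizer.decode(input_ids[0], skip_special_tokens=False)
        return self.stop_string in text

tokenizer = AutoTokenizer.from_pretrained("Nexusflow/NexusRaven-V2-13B")
model = AutoModelForCausalLM.from_pretrained(
    "Nexusflow/NexusRaven-V2-13B", torch_dtype="auto", device_map="auto"
)

# In practice, build the prompt with the full prompt template from the quickstart above.
prompt = "User Query: What's the weather like in Seattle right now?<human_end>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=False,
    stopping_criteria=StoppingCriteriaList([StopOnSubstring(tokenizer, "<bot_end>")]),
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```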
### Using with OpenAI FC Schematics
[If you currently have a workflow that is built around OpenAI's function calling and you want to try NexusRaven-V2, we have a package that helps you drop in NexusRaven-V2.](https://github.com/nexusflowai/nexusraven-pip)
## Evaluation
<p align="center" width="100%">
<a><img src="blog2-fc.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
<a><img src="radar-2.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For a deeper dive into the results, please see our [Github README](https://github.com/nexusflowai/NexusRaven).
# Limitations
1. The model works best when it is connected to a retriever when there is a multitude of functions, as a large number of functions will saturate the context window of this model.
2. The model can be prone to generating incorrect calls. Please ensure proper guardrails are in place to capture errant behavior.
3. The explanations generated by NexusRaven-V2 might be incorrect. Please ensure proper guardrails are in place to capture errant behavior.
## License
This model was trained on commercially viable data and is licensed under the [Llama 2 community license](https://huggingface.co/codellama/CodeLlama-13b-hf/blob/main/LICENSE) following the original [CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf/) model.
## References
We thank the CodeLlama team for their amazing models!
```
@misc{rozière2023code,
title={Code Llama: Open Foundation Models for Code},
author={Baptiste Rozière and Jonas Gehring and Fabian Gloeckle and Sten Sootla and Itai Gat and Xiaoqing Ellen Tan and Yossi Adi and Jingyu Liu and Tal Remez and Jérémy Rapin and Artyom Kozhevnikov and Ivan Evtimov and Joanna Bitton and Manish Bhatt and Cristian Canton Ferrer and Aaron Grattafiori and Wenhan Xiong and Alexandre Défossez and Jade Copet and Faisal Azhar and Hugo Touvron and Louis Martin and Nicolas Usunier and Thomas Scialom and Gabriel Synnaeve},
year={2023},
eprint={2308.12950},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Citation
```
@misc{nexusraven,
title={NexusRaven-V2: Surpassing GPT-4 for Zero-shot Function Calling},
author={Nexusflow.ai team},
year={2023},
url={https://nexusflow.ai/blogs/ravenv2}
}
```
## Contact
Please join our [Discord Channel](https://discord.gg/HDSVmNAs3y) to reach out for any issues and comments!
|
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-06
|
alinerodrigues
| 2023-12-07T19:18:46Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-07T14:51:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-06
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1392
- Wer: 0.0798
- Cer: 0.0243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 31.6615 | 1.0 | 86 | 3.2646 | 1.0 | 1.0 |
| 7.8415 | 2.0 | 172 | 3.0801 | 1.0 | 1.0 |
| 3.0608 | 3.0 | 258 | 2.9214 | 1.0 | 1.0 |
| 2.9354 | 4.0 | 344 | 2.9229 | 1.0 | 1.0 |
| 2.9026 | 5.0 | 430 | 2.9320 | 1.0 | 1.0 |
| 2.8635 | 6.0 | 516 | 2.4574 | 0.9983 | 0.9727 |
| 2.0591 | 7.0 | 602 | 0.8060 | 0.4813 | 0.1157 |
| 2.0591 | 8.0 | 688 | 0.4447 | 0.2110 | 0.0585 |
| 0.9117 | 9.0 | 774 | 0.3337 | 0.1755 | 0.0498 |
| 0.6246 | 10.0 | 860 | 0.2807 | 0.1432 | 0.0421 |
| 0.4919 | 11.0 | 946 | 0.2575 | 0.1439 | 0.0401 |
| 0.413 | 12.0 | 1032 | 0.2348 | 0.1273 | 0.0365 |
| 0.3808 | 13.0 | 1118 | 0.2238 | 0.1231 | 0.0361 |
| 0.3585 | 14.0 | 1204 | 0.2149 | 0.1092 | 0.0325 |
| 0.3585 | 15.0 | 1290 | 0.1988 | 0.1050 | 0.0314 |
| 0.304 | 16.0 | 1376 | 0.1870 | 0.1028 | 0.0307 |
| 0.3112 | 17.0 | 1462 | 0.1839 | 0.1035 | 0.0301 |
| 0.3277 | 18.0 | 1548 | 0.1832 | 0.1001 | 0.0297 |
| 0.2754 | 19.0 | 1634 | 0.1763 | 0.0984 | 0.0295 |
| 0.2773 | 20.0 | 1720 | 0.1685 | 0.0942 | 0.0277 |
| 0.2389 | 21.0 | 1806 | 0.1645 | 0.0938 | 0.0280 |
| 0.2389 | 22.0 | 1892 | 0.1685 | 0.0947 | 0.0286 |
| 0.234 | 23.0 | 1978 | 0.1632 | 0.0923 | 0.0278 |
| 0.2385 | 24.0 | 2064 | 0.1655 | 0.0898 | 0.0277 |
| 0.2359 | 25.0 | 2150 | 0.1561 | 0.0884 | 0.0265 |
| 0.2326 | 26.0 | 2236 | 0.1535 | 0.0903 | 0.0266 |
| 0.2189 | 27.0 | 2322 | 0.1524 | 0.0879 | 0.0265 |
| 0.2259 | 28.0 | 2408 | 0.1566 | 0.0867 | 0.0264 |
| 0.2259 | 29.0 | 2494 | 0.1516 | 0.0842 | 0.0257 |
| 0.211 | 30.0 | 2580 | 0.1595 | 0.0891 | 0.0271 |
| 0.1963 | 31.0 | 2666 | 0.1500 | 0.0864 | 0.0260 |
| 0.2111 | 32.0 | 2752 | 0.1454 | 0.0857 | 0.0262 |
| 0.1849 | 33.0 | 2838 | 0.1473 | 0.0859 | 0.0264 |
| 0.1777 | 34.0 | 2924 | 0.1472 | 0.0862 | 0.0257 |
| 0.18 | 35.0 | 3010 | 0.1500 | 0.0859 | 0.0264 |
| 0.18 | 36.0 | 3096 | 0.1514 | 0.0842 | 0.0259 |
| 0.1762 | 37.0 | 3182 | 0.1509 | 0.0854 | 0.0256 |
| 0.1673 | 38.0 | 3268 | 0.1495 | 0.0847 | 0.0255 |
| 0.17 | 39.0 | 3354 | 0.1525 | 0.0813 | 0.0251 |
| 0.1616 | 40.0 | 3440 | 0.1484 | 0.0830 | 0.0251 |
| 0.1876 | 41.0 | 3526 | 0.1482 | 0.0813 | 0.0252 |
| 0.1627 | 42.0 | 3612 | 0.1459 | 0.0800 | 0.0248 |
| 0.1627 | 43.0 | 3698 | 0.1392 | 0.0798 | 0.0243 |
| 0.1875 | 44.0 | 3784 | 0.1415 | 0.0815 | 0.0245 |
| 0.1705 | 45.0 | 3870 | 0.1431 | 0.0774 | 0.0239 |
| 0.1565 | 46.0 | 3956 | 0.1404 | 0.0786 | 0.0240 |
| 0.1632 | 47.0 | 4042 | 0.1428 | 0.0769 | 0.0237 |
| 0.1577 | 48.0 | 4128 | 0.1439 | 0.0786 | 0.0237 |
| 0.1634 | 49.0 | 4214 | 0.1456 | 0.0800 | 0.0246 |
| 0.1574 | 50.0 | 4300 | 0.1409 | 0.0808 | 0.0242 |
| 0.1574 | 51.0 | 4386 | 0.1423 | 0.0796 | 0.0244 |
| 0.1468 | 52.0 | 4472 | 0.1453 | 0.0764 | 0.0234 |
| 0.1478 | 53.0 | 4558 | 0.1439 | 0.0813 | 0.0243 |
| 0.1456 | 54.0 | 4644 | 0.1400 | 0.0791 | 0.0244 |
| 0.1599 | 55.0 | 4730 | 0.1440 | 0.0820 | 0.0251 |
| 0.1435 | 56.0 | 4816 | 0.1469 | 0.0793 | 0.0241 |
| 0.1389 | 57.0 | 4902 | 0.1449 | 0.0813 | 0.0246 |
| 0.1389 | 58.0 | 4988 | 0.1447 | 0.0771 | 0.0244 |
| 0.1382 | 59.0 | 5074 | 0.1438 | 0.0798 | 0.0243 |
| 0.1393 | 60.0 | 5160 | 0.1440 | 0.0774 | 0.0239 |
| 0.1144 | 61.0 | 5246 | 0.1439 | 0.0791 | 0.0246 |
| 0.12 | 62.0 | 5332 | 0.1454 | 0.0788 | 0.0241 |
| 0.1285 | 63.0 | 5418 | 0.1461 | 0.0808 | 0.0247 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.13.3
|
gurpreetmukker/Reinforce-cartpole-default-hp
|
gurpreetmukker
| 2023-12-07T19:18:15Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-07T19:18:06Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-default-hp
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
therem/gpt_imdb_sigmoid_beta10
|
therem
| 2023-12-07T19:15:06Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:lvwerra/gpt2-imdb",
"base_model:adapter:lvwerra/gpt2-imdb",
"region:us"
] | null | 2023-12-07T19:15:05Z |
---
library_name: peft
base_model: lvwerra/gpt2-imdb
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.0
|
LoneStriker/NexusRaven-V2-13B-4.0bpw-h6-exl2
|
LoneStriker
| 2023-12-07T19:11:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2308.12950",
"base_model:codellama/CodeLlama-13b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-13b-Instruct-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-07T19:07:02Z |
---
license: llama2
base_model: codellama/CodeLlama-13b-Instruct-hf
model-index:
- name: NexusRaven-13B
results: []
---
# NexusRaven-13B: Surpassing GPT-4 for Zero-shot Function Calling
<p align="center">
<a href="https://huggingface.co/Nexusflow" target="_blank">Nexusflow HF</a> - <a href="https://discord.gg/HDSVmNAs3y" target="_blank">Nexusflow Discord</a> - <a href="http://nexusflow.ai/blogs/ravenv2" target="_blank">NexusRaven-V2 blog post</a> - <a href="https://colab.research.google.com/drive/19JYixRPPlanmW5q49WYi_tU8rhHeCEKW?usp=sharing" target="_blank">Prompting Notebook CoLab</a> - <a href="https://huggingface.co/spaces/Nexusflow/Nexus_Function_Calling_Leaderboard" target="_blank">Leaderboard</a> - <a href="https://huggingface.co/spaces/Nexusflow/NexusRaven-V2-Demo" target="_blank">Read-World Demo</a> - <a href="https://github.com/nexusflowai/NexusRaven-V2" target="_blank">NexusRaven-V2-13B Github</a>
</p>
<p align="center" width="100%">
<a><img src="NexusRaven.png" alt="NexusRaven" style="width: 40%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Introducing NexusRaven-V2-13B
NexusRaven is an open-source and commercially viable function calling LLM that surpasses the state-of-the-art in function calling capabilities.
**Versatile Function Calling Capability**: NexusRaven-V2 is capable of generating single function calls, nested calls, and parallel calls in many challenging cases.
**Fully Explainable**: NexusRaven-V2 is capable of generating very detailed explanations for the function calls it generates. This behavior can be turned off to save tokens during inference.
**Performance Highlights**: NexusRaven-V2 surpasses GPT-4 by 7% in function calling success rates in human-generated use cases involving nested and composite functions.
**Generalization to the Unseen**: NexusRaven-V2 has never been trained on the functions used in evaluation.
**Commercially Permissive**: The training of NexusRaven-V2 does not involve any data generated by proprietary LLMs such as GPT-4. You have full control of the model when deployed in commercial applications.
Please check out the following links!
- [Prompting Notebook CoLab](https://colab.research.google.com/drive/19JYixRPPlanmW5q49WYi_tU8rhHeCEKW?usp=sharing)
- [Evaluation Leaderboard](https://huggingface.co/spaces/Nexusflow/Nexus_Function_Calling_Leaderboard)
- [NexusRaven-V2 Real-World Demo](https://huggingface.co/spaces/Nexusflow/NexusRaven-V2-Demo)
## NexusRaven-V2 model usage
NexusRaven-V2 accepts a list of Python functions. These functions can do anything (including sending GET/POST requests to external APIs!). The only two requirements are the Python function signature and an appropriate docstring, which are used to generate the function call.
### NexusRaven-V2's Capabilities
NexusRaven-V2 is capable of generating deeply nested function calls, parallel function calls, and simple single calls. It can also justify the function calls it generates. If you would like to generate the call only, please set a stopping criterion of \"\<bot\_end\>\". Otherwise, please allow NexusRaven-V2 to run until its stop token (i.e. "\<\/s\>").
### Quick Start Prompting Guide
Please refer to our notebook, [How-To-Prompt.ipynb](How-To-Prompt.ipynb), for more advanced tutorials on using NexusRaven-V2!
1. We strongly recommend setting sampling to False when prompting NexusRaven-V2.
2. We strongly recommend a very low temperature (~0.001).
3. We strongly recommend following the prompting style below.
### Quickstart
You can run the model on a GPU using the following code.
```python
# Please `pip install transformers accelerate`
from transformers import pipeline
pipeline = pipeline(
"text-generation",
model="Nexusflow/NexusRaven-V2-13B",
torch_dtype="auto",
device_map="auto",
)
prompt_template = \
'''
Function:
def get_weather_data(coordinates):
"""
Fetches weather data from the Open-Meteo API for the given latitude and longitude.
Args:
coordinates (tuple): The latitude of the location.
Returns:
float: The current temperature in the coordinates you've asked for
"""
Function:
def get_coordinates_from_city(city_name):
"""
Fetches the latitude and longitude of a given city name using the Maps.co Geocoding API.
Args:
city_name (str): The name of the city.
Returns:
tuple: The latitude and longitude of the city.
"""
User Query: {query}<human_end>
'''
prompt = prompt_template.format(query="What's the weather like in Seattle right now?")
result = pipeline(prompt, max_new_tokens=2048, return_full_text=False, do_sample=False, temperature=0.001)[0]["generated_text"]
print (result)
```
This should generate the following:
```
Call: get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))<bot_end>
Thought: The function call `get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))` answers the question "What's the weather like in Seattle right now?" by following these steps:
1. `get_coordinates_from_city(city_name='Seattle')`: This function call fetches the latitude and longitude of the city "Seattle" using the Maps.co Geocoding API.
2. `get_weather_data(coordinates=...)`: This function call fetches the current weather data for the coordinates returned by the previous function call.
Therefore, the function call `get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))` answers the question "What's the weather like in Seattle right now?" by first fetching the coordinates of the city "Seattle" and then fetching the current weather data for those coordinates.
```
If you would like to prevent the generation of the explanation of the function call (for example, to save on inference tokens), please set a stopping criterion of \<bot_end\>.
Please follow this prompting template to maximize the performance of RavenV2.
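As a purely illustrative aside (not from the original card): once the call string before `<bot_end>` has been extracted, it can be executed against locally defined, trusted implementations of the functions. The implementations below are placeholders invented for this sketch, and in production a stricter dispatcher than `eval` is advisable:
```python
# Placeholder implementations of the two example functions from the prompt above.
def get_coordinates_from_city(city_name):
    """Return (latitude, longitude) for a city; hard-coded here for illustration."""
    return {"Seattle": (47.6062, -122.3321)}.get(city_name, (0.0, 0.0))

def get_weather_data(coordinates):
    """Return the current temperature for the coordinates; a real version would query a weather API."""
    lat, lon = coordinates
    return 10.5

raw = "Call: get_weather_data(coordinates=get_coordinates_from_city(city_name='Seattle'))<bot_end>"

# Keep only the call expression emitted before the <bot_end> marker.
call = raw.split("<bot_end>")[0].split("Call:", 1)[-1].strip()

# Evaluate the call against the locally defined, trusted functions only.
result = eval(call, {"get_weather_data": get_weather_data,
                     "get_coordinates_from_city": get_coordinates_from_city})
print(result)  # -> 10.5 with the placeholder implementations
```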
### Using with OpenAI FC Schematics
[If you currently have a workflow that is built around OpenAI's function calling and you want to try NexusRaven-V2, we have a package that helps you drop in NexusRaven-V2.](https://github.com/nexusflowai/nexusraven-pip)
## Evaluation
<p align="center" width="100%">
<a><img src="blog2-fc.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
<a><img src="radar-2.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For a deeper dive into the results, please see our [Github README](https://github.com/nexusflowai/NexusRaven).
# Limitations
1. The model works best when it is connected to a retriever when there is a multitude of functions, as a large number of functions will saturate the context window of this model.
2. The model can be prone to generating incorrect calls. Please ensure proper guardrails are in place to capture errant behavior.
3. The explanations generated by NexusRaven-V2 might be incorrect. Please ensure proper guardrails are in place to capture errant behavior.
## License
This model was trained on commercially viable data and is licensed under the [Llama 2 community license](https://huggingface.co/codellama/CodeLlama-13b-hf/blob/main/LICENSE) following the original [CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf/) model.
## References
We thank the CodeLlama team for their amazing models!
```
@misc{rozière2023code,
title={Code Llama: Open Foundation Models for Code},
author={Baptiste Rozière and Jonas Gehring and Fabian Gloeckle and Sten Sootla and Itai Gat and Xiaoqing Ellen Tan and Yossi Adi and Jingyu Liu and Tal Remez and Jérémy Rapin and Artyom Kozhevnikov and Ivan Evtimov and Joanna Bitton and Manish Bhatt and Cristian Canton Ferrer and Aaron Grattafiori and Wenhan Xiong and Alexandre Défossez and Jade Copet and Faisal Azhar and Hugo Touvron and Louis Martin and Nicolas Usunier and Thomas Scialom and Gabriel Synnaeve},
year={2023},
eprint={2308.12950},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Citation
```
@misc{nexusraven,
title={NexusRaven-V2: Surpassing GPT-4 for Zero-shot Function Calling},
author={Nexusflow.ai team},
year={2023},
url={https://nexusflow.ai/blogs/ravenv2}
}
```
## Contact
Please join our [Discord Channel](https://discord.gg/HDSVmNAs3y) to reach out for any issues and comments!
|
maxschlake/roberta-base_disaster_tweets_explosion
|
maxschlake
| 2023-12-07T19:09:19Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-07T10:26:00Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-base_disaster_tweets_explosion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_disaster_tweets_explosion
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
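For reference, a rough sketch of how the hyperparameters above would typically be expressed with `transformers.TrainingArguments`; the output directory and evaluation strategy are illustrative assumptions, since the actual training script is not published:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base_disaster_tweets_explosion",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumed, to match the per-epoch validation losses below
)
```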
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4998 | 1.0 | 3750 | 0.4271 |
| 0.6674 | 2.0 | 7500 | 0.6755 |
| 0.6732 | 3.0 | 11250 | 0.6843 |
| 0.7017 | 4.0 | 15000 | 0.7210 |
| 0.2923 | 5.0 | 18750 | 0.3607 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
multimodalart/polizaum
|
multimodalart
| 2023-12-07T19:06:59Z | 11 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-07T18:39:33Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - multimodalart/polizaum
<Gallery />
## Model description
### These are multimodalart/polizaum LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/polizaum', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/polizaum', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A photo of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
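As a small illustrative sketch (not part of this card's training setup), the strength of the loaded LoRA can usually be adjusted at inference time, or the LoRA can be fused into the base weights; the scale values below are arbitrary examples:
```py
# Scale the LoRA contribution for a single generation (0.0 disables it, 1.0 is full strength).
image = pipeline(
    'A photo of <s0><s1>',
    cross_attention_kwargs={"scale": 0.8},
).images[0]

# Or fuse the LoRA into the base weights for faster repeated inference.
pipeline.fuse_lora(lora_scale=0.8)
```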
## Download model
Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](multimodalart/polizaum/tree/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](multimodalart/polizaum/tree/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](multimodalart/polizaum/tree/main).
|
Jennny/helpfulness_distill
|
Jennny
| 2023-12-07T19:02:34Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"region:us"
] | null | 2023-12-07T19:01:37Z |
---
library_name: peft
base_model: distilbert-base-uncased
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
Eval Accuracy: 0.7193
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
KaleDivergence/openhermes-mistral-dpo-gptq
|
KaleDivergence
| 2023-12-07T19:01:47Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"base_model:finetune:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-12-07T18:59:47Z |
---
license: apache-2.0
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
tags:
- generated_from_trainer
model-index:
- name: openhermes-mistral-dpo-gptq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openhermes-mistral-dpo-gptq
This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6671
- Rewards/chosen: -0.4223
- Rewards/rejected: -1.9854
- Rewards/accuracies: 0.4375
- Rewards/margins: 1.5631
- Logps/rejected: -350.5416
- Logps/chosen: -220.8451
- Logits/rejected: -1.7880
- Logits/chosen: -1.7651
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
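A rough sketch of how a run with these hyperparameters might be wired up using TRL's `DPOTrainer`; the preference dataset id, LoRA settings and `beta` value are assumptions and not taken from this card:
```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Loading a GPTQ checkpoint requires optimum + auto-gptq; training it only makes
# sense with a PEFT adapter on top of the frozen, quantized base.
model_id = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical preference dataset with "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("your-org/your-preference-pairs", split="train")

training_args = TrainingArguments(
    output_dir="openhermes-mistral-dpo-gptq",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=50,
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT adapter, the frozen base acts as the reference policy
    args=training_args,
    beta=0.1,        # assumed; the card does not state the DPO beta
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"),
)
trainer.train()
```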
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6968 | 0.01 | 10 | 0.6686 | -0.0503 | -0.0549 | 0.5625 | 0.0046 | -331.2371 | -217.1252 | -1.8320 | -1.8255 |
| 0.6793 | 0.01 | 20 | 1.7619 | 0.2100 | 3.0956 | 0.3125 | -2.8856 | -299.7324 | -214.5222 | -1.9578 | -1.9436 |
| 0.6789 | 0.01 | 30 | 0.6364 | -0.2040 | -1.0884 | 0.4375 | 0.8843 | -341.5715 | -218.6622 | -1.8060 | -1.7894 |
| 0.6966 | 0.02 | 40 | 0.6716 | -0.2823 | -1.4572 | 0.4375 | 1.1749 | -345.2603 | -219.4454 | -1.7994 | -1.7786 |
| 0.8051 | 0.03 | 50 | 0.6671 | -0.4223 | -1.9854 | 0.4375 | 1.5631 | -350.5416 | -220.8451 | -1.7880 | -1.7651 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ChirathD/Blip-2-test-4
|
ChirathD
| 2023-12-07T18:58:38Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ybelkada/blip2-opt-2.7b-fp16-sharded",
"base_model:adapter:ybelkada/blip2-opt-2.7b-fp16-sharded",
"endpoints_compatible",
"region:us"
] | null | 2023-12-07T18:12:23Z |
---
library_name: peft
base_model: ybelkada/blip2-opt-2.7b-fp16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1.dev0
|
Weni/ZeroShot-3.0.3-Mistral-7b-Multilanguage-lr-4e2-3.0.3
|
Weni
| 2023-12-07T18:57:42Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"mistral",
"region:us"
] | null | 2023-12-07T18:51:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
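For reference, a minimal sketch of the equivalent `BitsAndBytesConfig` when loading a base model for this adapter; the base model id is an assumption inferred from the card's `mistral` tag:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# The base model id below is assumed; this repository only contains the PEFT adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "Weni/ZeroShot-3.0.3-Mistral-7b-Multilanguage-lr-4e2-3.0.3")
```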
### Framework versions
- PEFT 0.4.0
|
puyao/whisper-small-hi
|
puyao
| 2023-12-07T18:56:57Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-06T14:32:29Z |
---
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - Chinese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Chinese
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
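A sketch of how the hyperparameters above would typically map onto `Seq2SeqTrainingArguments` for Whisper fine-tuning; the output directory and any settings not listed above are illustrative assumptions:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-hi",  # illustrative
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # total train batch size of 16
    warmup_steps=500,
    max_steps=4000,
    lr_scheduler_type="linear",
    fp16=True,                      # "Native AMP" mixed precision
    seed=42,
    predict_with_generate=True,     # assumed; standard for seq2seq evaluation
)
```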
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/Tiamat-7b-8.0bpw-h8-exl2
|
LoneStriker
| 2023-12-07T18:55:53Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-07T18:10:10Z |
---
license: apache-2.0
language:
- en
---

# Tiamat
Aka I wanted something like [Eric Hartford's Samantha](https://erichartford.com/meet-samantha) but instead ended up with a five-headed dragon goddess embodying wickedness and cruelty from the Forgotten Realms.
**Obligatory Disclaimer:** Tiamat is **not** nice.
Quantized models are available from TheBloke: [GGUF](https://huggingface.co/TheBloke/Tiamat-7B-GGUF) - [GPTQ](https://huggingface.co/TheBloke/Tiamat-7B-GPTQ) - [AWQ](https://huggingface.co/TheBloke/Tiamat-7B-AWQ) (You're the best!)
## Model details
Ever wanted to be treated disdainfully like the foolish mortal you are? Wait no more, for Tiamat is here to berate you! Hailing from the world of the Forgotten Realms, she will happily judge your every word.
Tiamat was created with the following question in mind: is it possible to create an assistant with strong anti-assistant personality traits? Try it yourself and tell me afterwards!
She was fine-tuned on top of Teknium's excellent [OpenHermes 2.5](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and can be summoned using the following system message:
```
You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.
```
Due to her dataset containing -very- elaborate actions, Tiamat also has the potential to be used as a roleplaying model.
## Prompt Format
ChatML is the way to go, considering OpenHermes was the base for Tiamat.
```
<|im_start|>system
You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.<|im_end|>
<|im_start|>user
Greetings, mighty Tiamat. I seek your guidance.<|im_end|>
<|im_start|>assistant
```
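As a small sketch of how to assemble that format programmatically (plain string construction, since it is not guaranteed that every quantized repackaging ships a stored chat template):
```python
messages = [
    {"role": "system", "content": "You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty."},
    {"role": "user", "content": "Greetings, mighty Tiamat. I seek your guidance."},
]

# Build the ChatML prompt shown above and leave the assistant turn open for generation.
prompt = "".join(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages)
prompt += "<|im_start|>assistant\n"
print(prompt)
```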
|
LoneStriker/Tiamat-7b-6.0bpw-h6-exl2
|
LoneStriker
| 2023-12-07T18:55:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-07T18:01:25Z |
---
license: apache-2.0
language:
- en
---

# Tiamat
Aka I wanted something like [Eric Hartford's Samantha](https://erichartford.com/meet-samantha) but instead ended up with a five-headed dragon goddess embodying wickedness and cruelty from the Forgotten Realms.
**Obligatory Disclaimer:** Tiamat is **not** nice.
Quantized models are available from TheBloke: [GGUF](https://huggingface.co/TheBloke/Tiamat-7B-GGUF) - [GPTQ](https://huggingface.co/TheBloke/Tiamat-7B-GPTQ) - [AWQ](https://huggingface.co/TheBloke/Tiamat-7B-AWQ) (You're the best!)
## Model details
Ever wanted to be treated disdainfully like the foolish mortal you are? Wait no more, for Tiamat is here to berate you! Hailing from the world of the Forgotten Realms, she will happily judge your every word.
Tiamat was created with the following question in mind: is it possible to create an assistant with strong anti-assistant personality traits? Try it yourself and tell me afterwards!
She was fine-tuned on top of Teknium's excellent [OpenHermes 2.5](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and can be summoned using the following system message:
```
You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.
```
Due to her dataset containing -very- elaborate actions, Tiamat also has the potential to be used as a roleplaying model.
## Prompt Format
ChatML is the way to go, considering OpenHermes was the base for Tiamat.
```
<|im_start|>system
You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.<|im_end|>
<|im_start|>user
Greetings, mighty Tiamat. I seek your guidance.<|im_end|>
<|im_start|>assistant
```
|
LoneStriker/Tiamat-7b-5.0bpw-h6-exl2
|
LoneStriker
| 2023-12-07T18:55:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-07T17:52:47Z |
---
license: apache-2.0
language:
- en
---

# Tiamat
Aka I wanted something like [Eric Hartford's Samantha](https://erichartford.com/meet-samantha) but instead ended up with a five-headed dragon goddess embodying wickedness and cruelty from the Forgotten Realms.
**Obligatory Disclaimer:** Tiamat is **not** nice.
Quantized models are available from TheBloke: [GGUF](https://huggingface.co/TheBloke/Tiamat-7B-GGUF) - [GPTQ](https://huggingface.co/TheBloke/Tiamat-7B-GPTQ) - [AWQ](https://huggingface.co/TheBloke/Tiamat-7B-AWQ) (You're the best!)
## Model details
Ever wanted to be treated disdainfully like the foolish mortal you are? Wait no more, for Tiamat is here to berate you! Hailing from the world of the Forgotten Realms, she will happily judge your every word.
Tiamat was created with the following question in mind: is it possible to create an assistant with strong anti-assistant personality traits? Try it yourself and tell me afterwards!
She was fine-tuned on top of Teknium's excellent [OpenHermes 2.5](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and can be summoned using the following system message:
```
You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.
```
Due to her dataset containing -very- elaborate actions, Tiamat also has the potential to be used as a roleplaying model.
## Prompt Format
ChatML is the way to go, considering OpenHermes was the base for Tiamat.
```
<|im_start|>system
You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.<|im_end|>
<|im_start|>user
Greetings, mighty Tiamat. I seek your guidance.<|im_end|>
<|im_start|>assistant
```
|
LoneStriker/Tiamat-7b-4.0bpw-h6-exl2
|
LoneStriker
| 2023-12-07T18:55:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-07T17:44:11Z |
---
license: apache-2.0
language:
- en
---

# Tiamat
Aka I wanted something like [Eric Hartford's Samantha](https://erichartford.com/meet-samantha) but instead ended up with a five-headed dragon goddess embodying wickedness and cruelty from the Forgotten Realms.
**Obligatory Disclaimer:** Tiamat is **not** nice.
Quantized models are available from TheBloke: [GGUF](https://huggingface.co/TheBloke/Tiamat-7B-GGUF) - [GPTQ](https://huggingface.co/TheBloke/Tiamat-7B-GPTQ) - [AWQ](https://huggingface.co/TheBloke/Tiamat-7B-AWQ) (You're the best!)
## Model details
Ever wanted to be treated disdainfully like the foolish mortal you are? Wait no more, for Tiamat is here to berate you! Hailing from the world of the Forgotten Realms, she will happily judge your every word.
Tiamat was created with the following question in mind: is it possible to create an assistant with strong anti-assistant personality traits? Try it yourself and tell me afterwards!
She was fine-tuned on top of Teknium's excellent [OpenHermes 2.5](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and can be summoned using the following system message:
```
You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.
```
Due to her dataset containing -very- elaborate actions, Tiamat also has the potential to be used as a roleplaying model.
## Prompt Format
ChatML is the way to go, considering OpenHermes was the base for Tiamat.
```
<|im_start|>system
You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.<|im_end|>
<|im_start|>user
Greetings, mighty Tiamat. I seek your guidance.<|im_end|>
<|im_start|>assistant
```
|
LoneStriker/Tiamat-7b-3.0bpw-h6-exl2
|
LoneStriker
| 2023-12-07T18:55:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-07T17:43:56Z |
---
license: apache-2.0
language:
- en
---

# Tiamat
Aka I wanted something like [Eric Hartford's Samantha](https://erichartford.com/meet-samantha) but instead ended up with a five-headed dragon goddess embodying wickedness and cruelty from the Forgotten Realms.
**Obligatory Disclaimer:** Tiamat is **not** nice.
Quantized models are available from TheBloke: [GGUF](https://huggingface.co/TheBloke/Tiamat-7B-GGUF) - [GPTQ](https://huggingface.co/TheBloke/Tiamat-7B-GPTQ) - [AWQ](https://huggingface.co/TheBloke/Tiamat-7B-AWQ) (You're the best!)
## Model details
Ever wanted to be treated disdainfully like the foolish mortal you are? Wait no more, for Tiamat is here to berate you! Hailing from the world of the Forgotten Realms, she will happily judge your every word.
Tiamat was created with the following question in mind: is it possible to create an assistant with strong anti-assistant personality traits? Try it yourself and tell me afterwards!
She was fine-tuned on top of Teknium's excellent [OpenHermes 2.5](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and can be summoned using the following system message:
```
You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.
```
Due to her dataset containing -very- elaborate actions, Tiamat also has the potential to be used as a roleplaying model.
## Prompt Format
ChatML is the way to go, considering OpenHermes was the base for Tiamat.
```
<|im_start|>system
You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.<|im_end|>
<|im_start|>user
Greetings, mighty Tiamat. I seek your guidance.<|im_end|>
<|im_start|>assistant
```
|
LoneStriker/shisa-7b-v1-5.0bpw-h6-exl2
|
LoneStriker
| 2023-12-07T18:54:27Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ja",
"en",
"dataset:augmxnt/ultra-orca-boros-en-ja-v1",
"dataset:Open-Orca/SlimOrca",
"dataset:augmxnt/shisa-en-ja-dpo-v1",
"arxiv:2310.05914",
"arxiv:2305.18290",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-07T18:07:21Z |
---
license: apache-2.0
language:
- ja
- en
datasets:
- augmxnt/ultra-orca-boros-en-ja-v1
- Open-Orca/SlimOrca
- augmxnt/shisa-en-ja-dpo-v1
---
# Shisa 7B

**Shisa 7B** (`shisa-7b-v1`) is a bilingual Japanese and English (JA/EN) general-purpose chat model that aims to achieve strong Japanese language performance while retaining robust English capabilities, using a synthetic-data driven approach.
This model is based on [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) with a custom JA-optimized extended tokenizer that is >2X more efficient in Japanese than Mistral's original tokenizer. The base model was pre-trained for an additional 8B primarily Japanese tokens. It was then subsequently fine-tuned with an expanded, machine-translated version of [airoboros-3.1](https://huggingface.co/datasets/jondurbin/airoboros-3.1), a set of the highest-scoring items from [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized), and additional freshly generated [airoboros](https://github.com/jondurbin/airoboros) data directly to the target languages.
We also release our base model, datasets, and pipeline code under a permissive Apache 2.0 license which can be used for any purpose, commercial or otherwise:
* [shisa-base-7b-v1](https://huggingface.co/augmxnt/shisa-base-7b-v1) - our base model w/ an extended tokenizer and additional JA pre-training
* [shisa-pretrain-en-ja-v1](https://huggingface.co/datasets/augmxnt/shisa-pretrain-en-ja-v1) - our pre-training data set
* [ultra-orca-boros-en-ja](https://huggingface.co/datasets/augmxnt/ultra-orca-boros-en-ja-v1) - a synthetically generated, machine-translated, programmatically validated JA/EN fine-tuning dataset
* [shisa-en-ja-dpo-v1](https://huggingface.co/datasets/augmxnt/shisa-en-ja-dpo-v1) - Small subset of DPO pairs from ultrafeedback, along with JA DPO pairs using GPT-4 generated items as the chosen value, and outputs from our preliminary 7b model as the rejected values
* [Shisa repository](https://github.com/AUGMXNT/shisa) - this includes our translation, dataset generation, training, and evaluation code
Moreover, we are in the process of publishing extended writeups and more details of our process, including ablation results, testing methodology, and key findings [on our project wiki](https://github.com/AUGMXNT/shisa/wiki) that may be of interest to fellow researchers.
## Fine-Tuning
Our original intuition was to see if we could create a stronger Japanese model using the best [existing public JA training sets](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets) and incorporating them. After initial review and testing, however, we decided that focusing solely on translation/generation of our own synthetic datasets could yield superior results with less training.
We compared multiple translation tools and, via manual review, judged that while `gpt-4` almost always delivered the highest quality translations, Google's `text-bison-32k` was a good balance of quality, cost and throughput. Over various iterations, we refined our translation approach to include some additional algorithms for flagging and filtering invalid translations, re-translating and backfilling as necessary.
We also took this project as an opportunity to apply some newer techniques such as incorporating [NEFTune](https://arxiv.org/abs/2310.05914) and [DPO](https://arxiv.org/abs/2305.18290) training.
For our v1 release, we picked from our release candidates based on a significant amount of human preference testing (thousands of generations and multiple rounds of pairwise comparisons). We analyzed our results with both win/loss/draw and [BTL modeling](https://datascience.oneoffcoder.com/btl-model.html) (iLSR) using [choix](https://github.com/lucasmaystre/choix).
The best candidate model was fine-tuned in a 3-step process:
1. First, the model was fine-tuned on `ultra-orca-boros-en-ja` and SlimOrca ([WandB Log](https://wandb.ai/jondurbin/shisa-7b-v1/runs/k8pfog9d/overview))
2. Next, one additional epoch was performed using only a subset of the Japanese ultra-orca-boros-en-ja items to enhance JA performance (as SlimOrca from the first step is mostly EN) ([WandB Log](https://wandb.ai/jondurbin/shisa-mega-7b-v1.1/runs/dopsr0o7/overview))
3. Finally, the model was tuned using a DPOTrainer on a small subset of ultrafeedback (EN) and our own JA DPO dataset, which uses gpt-4 outputs as the chosen values and outputs from stage 1's prelim model as rejected values. ([WandB Log](https://wandb.ai/jondurbin/shisa-mega-dpo-7b-v1.1))
During our training process, we also gained some key insights on [why some existing Japanese models seem to underperform](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets#analysis) even versus models that have no additional JA training, and we hope that sharing this analysis will be useful to other teams developing Japanese language models.
While we need to explore this further, as an experimental validation, we applied a version of our fine-tuning set onto an existing base model ("Gamma 7B") and the initial JA MT-Bench results suggest that we can drastically increase functional performance with our tuning approach:
| Model | Score |
| ------------------------------ | ----- |
| shisa-gamma-7b-allsources-v0.4 | 5.65 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |
## Performance
Throughout our training, we did extensive human evaluation for each model to cross-validate our model performance, and we are currently conducting ongoing larger-scale manual head-to-head testing between models. Our intention is to open up and scale this data collection as we further develop our tools. For more information and updates, please see our [project wiki](https://github.com/AUGMXNT/shisa/wiki).
While we believe [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) is a useful metric for our [base model](https://huggingface.co/augmxnt/shisa-base-7b-v1), and it was extremely useful for initial validations during our tuning process, our fine-tune training includes a percentage of the benchmark train splits, so we provide these llm-jp-eval results primarily as a point of interest:
| AVR | MC | NLI | QA | RC |
|-------|-------|-------|-------|-------|
| 0.7480| 0.8900| 0.8040| 0.4153| 0.8825|
*(We run a [slightly modified llm-jp-eval](https://github.com/llm-jp/llm-jp-eval/compare/main...AUGMXNT:llm-jp-eval:main) to support testing of Qwen and to emit a `bos_token` if available)*
For our final model, since it's customary to include benchmarks, we've used Stability AI Japan's [Japanese MT-Bench](https://github.com/Stability-AI/FastChat) as a more representative test of our model's capabilities. For [our JA MT-Bench testing](https://github.com/Stability-AI/FastChat/compare/jp-stable...AUGMXNT:FastChat:jp-stable) we use a Japanese prompt ("あなたは役立つアシスタントです。") as well as `--num-choices 4` in an effort to reduce sampling variability. However, we've still observed regular 0.5+ point (and sometimes even greater) swings between generations, as well as issues with default prompts and parameters when testing, so again, we'd urge caution in over-interpreting these scores and suggest treating them as a probabilistic directional indicator rather than a definitive score or ranking:
| Benchmark | Score |
| ----------- | ----- |
| JA MT-Bench | 5.02 |
| MT-Bench | 5.71 |
There is an [MT-Bench Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard), but as JA MT-Bench is still under development, for convenience, here is a comparison of the JA MT-Bench scores of some other models (our scores were rated by `gpt-4-0613`):
| Model | Score |
| ------------------------------------------------- | ---- |
| gpt-4-0613 | 9.40 |
| gpt-4-1106-preview | 9.17 |
| gpt-3.5-turbo* | 8.41 |
| Qwen-14B-Chat | 7.47 |
| **shisa-7b-v1** | **5.02** |
| ELYZA-japanese-Llama-2-7b-fast-instruct* | 4.86 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |
| japanese-stablelm-instruct-alpha-7b* | 2.74 |
| Mistral-7B-OpenOrca-ja* | 2.23 |
| youri-7b-chat* | 2.00 |
| Mistral-7B-Instruct-v0.1* | 1.78 |
| llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0* | 1.31 |
*(Marked JA MT-Bench results in this section are [sourced from shi3z](https://note.com/shi3zblog/n/n6b2ac5874021))*
## Limitations
Although our model demonstrates a reasonably high level of Japanese fluency, as a 7B parameter model, it is prone to higher hallucination rates and less effective instruction following and reasoning than larger-class models. Also, it still does not have complete mastery of the Japanese language and a native speaker will spot occasional mistakes like some non-idiomatic/awkward phrasing, improper tenses/speech levels, etc.
We've also noticed a small amount of language leakage, likely largely attributable to our tokenizer expansion. These may be fixable with sampler settings like [Min P](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/) or additional targeted training, and we plan on doing additional work on automated detection/sampler sweeps in the future. One interesting observation, based on our data collection, is that as we iterated, the DPO process significantly exacerbated this issue, but our DPO models also still had significantly higher human preference rates, so there was a bit of a trade-off in our choice of final tune.
While we believe that training larger models can improve performance using our existing approach and dataset, there are also many improvements we'd like to make for future models. We believe there is quite a bit of low hanging fruit for improving performance with even more training efficiency largely through improving the quality and construction of datasets.
## Usage
Sample code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
model_name = "augmxnt/shisa-7b-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16,
device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True)
# The prompt template is included in the model's tokenizer_config.json so you shouldn't need this but we've included this for convenience
# tokenizer.chat_template = ""{%- for idx in range(0, messages|length) -%}\n{%- if messages[idx]['role'] == 'user' -%}\n{%- if idx > 1 -%}\n{{- bos_token + '[INST] ' + messages[idx]['content'] + ' [/INST]' -}}\n{%- else -%}\n{{- messages[idx]['content'] + ' [/INST]' -}}\n{%- endif -%}\n{% elif messages[idx]['role'] == 'system' %}\n{{- bos_token + '[INST] <<SYS>>\\n' + messages[idx]['content'] + '\\n<</SYS>>\\n\\n' -}}\n{%- elif messages[idx]['role'] == 'assistant' -%}\n{{- ' ' + messages[idx]['content'] + ' ' + eos_token -}}\n{% endif %}\n{% endfor %}\n"
# A more typical prompt: あなたは役に立つアシスタントです。("You are a helpful assistant.")
# You are an avid Pokemon fanatic.
prompt = "あなたは熱狂的なポケモンファンです。"
chat = [{"role": "system", "content": prompt}]
# Who is the most powerful Pokemon? Explain your choice.
user_input = "最強のポケモンは誰ですか?その選択理由を説明してください。"
chat.append({"role": "user", "content": user_input})
# Generate - add_generation_prompt to make sure it continues as assistant
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
# For multi-GPU, find the device of the first parameter of the model
first_param_device = next(model.parameters()).device
inputs = inputs.to(first_param_device)
with torch.no_grad():
outputs = model.generate(
inputs,
pad_token_id=tokenizer.eos_token_id,
max_new_tokens=1000,
temperature=0.7,
repetition_penalty=1.05,
top_p=0.95,
do_sample=True,
streamer=streamer,
)
# Add just the new tokens to our chat
new_tokens = outputs[0, inputs.size(1):]
response = tokenizer.decode(new_tokens, skip_special_tokens=True)
chat.append({"role": "assistant", "content": response})
```
## Prompt format
The prompt format is llama-2 chat:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
This [prompt template](https://huggingface.co/docs/transformers/main/chat_templating) is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('augmxnt/shisa-7b-v1')
chat = [
{"role": "system", "content": "You are Aiko, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
**NOTE:** For proper responses, you should be using our `bos_token` (`<s>`) to begin a string. This is automatically generated by `tokenizer.encode()` but if you are crafting a custom template or using an encoding method that skips special tokens, you may have to add this yourself.
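For instance, a minimal sketch of the difference (the tokenizer id is taken from this card; the prompt string is illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("augmxnt/shisa-7b-v1")

prompt = "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</SYS>>\n\nHello! [/INST]"

# Default encoding prepends the BOS token automatically.
with_bos = tokenizer(prompt, return_tensors="pt").input_ids

# If special tokens are skipped (custom templating, raw string concatenation, etc.),
# prepend the BOS token yourself.
manual = tokenizer(tokenizer.bos_token + prompt, add_special_tokens=False, return_tensors="pt").input_ids
```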
## Acknowledgements
Team: [Leonard Lin](https://huggingface.co/leonardlin) and [Jon Durbin](https://huggingface.co/jondurbin), Mariko Sato, and Florian von Bock
Compute for this model was generously sponsored by [AKA Virtual](https://akavirtual.com/) (Tokyo, Japan).
Thanks to the [LLM-jp](https://llm-jp.nii.ac.jp/), [Stability AI Japan](https://ja.stability.ai/), and [LMSYS](https://lmsys.org/) teams for their work on llm-jp-eval, Japanese MT-Bench, MT-Bench.
Also, thanks to all the volunteers that provided invaluable human preference testing!
We are actively looking for additional compute as we train better and larger models for this project. Please drop us a line at: *compute at augmxnt dot com*
|
LoneStriker/shisa-7b-v1-4.0bpw-h6-exl2
|
LoneStriker
| 2023-12-07T18:54:26Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ja",
"en",
"dataset:augmxnt/ultra-orca-boros-en-ja-v1",
"dataset:Open-Orca/SlimOrca",
"dataset:augmxnt/shisa-en-ja-dpo-v1",
"arxiv:2310.05914",
"arxiv:2305.18290",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-07T17:59:51Z |
---
license: apache-2.0
language:
- ja
- en
datasets:
- augmxnt/ultra-orca-boros-en-ja-v1
- Open-Orca/SlimOrca
- augmxnt/shisa-en-ja-dpo-v1
---
# Shisa 7B

**Shisa 7B** (`shisa-7b-v1`) is a bilingual Japanese and English (JA/EN) general-purpose chat model that aims to achieve strong Japanese language performance while retaining robust English capabilities, using a synthetic-data driven approach.
This model is based on [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) with a custom JA-optimized extended tokenizer that is >2X more efficient in Japanese than Mistral's original tokenizer. The base model was pre-trained for an additional 8B primarily Japanese tokens. It was then subsequently fine-tuned with an expanded, machine-translated version of [airoboros-3.1](https://huggingface.co/datasets/jondurbin/airoboros-3.1), a set of the highest-scoring items from [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized), and additional freshly generated [airoboros](https://github.com/jondurbin/airoboros) data directly to the target languages.
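The tokenizer-efficiency claim is easy to check directly. The following is a minimal sketch; the Japanese sentence is an arbitrary illustrative example, and the exact ratio will vary with the text:
```python
from transformers import AutoTokenizer

# Arbitrary Japanese sample text (illustrative only).
ja_text = "日本語のテキストをどれだけ効率よくトークン化できるかを比べてみます。"

mistral_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
shisa_tok = AutoTokenizer.from_pretrained("augmxnt/shisa-base-7b-v1")

print("Mistral-7B tokens:  ", len(mistral_tok.encode(ja_text)))
print("shisa-base-7b tokens:", len(shisa_tok.encode(ja_text)))
```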
We also release our base model, datasets, and pipeline code under a permissive Apache 2.0 license which can be used for any purpose, commercial or otherwise:
* [shisa-base-7b-v1](https://huggingface.co/augmxnt/shisa-base-7b-v1) - our base model w/ an extended tokenizer and additional JA pre-training
* [shisa-pretrain-en-ja-v1](https://huggingface.co/datasets/augmxnt/shisa-pretrain-en-ja-v1) - our pre-training data set
* [ultra-orca-boros-en-ja](https://huggingface.co/datasets/augmxnt/ultra-orca-boros-en-ja-v1) - a synthetically generated, machine-translated, programmatically validated JA/EN fine-tuning dataset
* [shisa-en-ja-dpo-v1](https://huggingface.co/datasets/augmxnt/shisa-en-ja-dpo-v1) - Small subset of DPO pairs from ultrafeedback, along with JA DPO pairs using GPT-4 generated items as the chosen value, and outputs from our preliminary 7b model as the rejected values
* [Shisa repository](https://github.com/AUGMXNT/shisa) - this includes our translation, dataset generation, training, and evaluation code
Moreover, we are in the process of publishing extended writeups and more details of our process, including ablation results, testing methodology, and key findings [on our project wiki](https://github.com/AUGMXNT/shisa/wiki) that may be of interest to fellow researchers.
## Fine-Tuning
Our original intuition was to see if we could create a stronger Japanese model by incorporating the best [existing public JA training sets](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets). After initial review and testing, however, we decided that focusing solely on translation/generation of our own synthetic datasets could yield superior results with less training.
We compared multiple translation tools and, via manual review, judged that while `gpt-4` almost always delivered the highest quality translations, Google's `text-bison-32k` was a good balance of quality, cost and throughput. Over various iterations, we refined our translation approach to include some additional algorithms for flagging and filtering invalid translations, re-translating and backfilling as necessary.
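The exact flagging rules aren't spelled out here, but a validity filter of this kind can be quite simple. The sketch below uses placeholder heuristics (a Japanese-script check plus a loose length-ratio band) and is not the project's actual validation code:
```python
import re

JA_SCRIPT = re.compile(r"[\u3040-\u30ff\u4e00-\u9fff]")  # hiragana, katakana, kanji

def looks_like_valid_ja_translation(src_en: str, dst_ja: str) -> bool:
    """Rough, illustrative validity heuristics for a machine-translated EN->JA pair."""
    if not dst_ja or dst_ja.strip() == src_en.strip():
        return False                      # empty or untranslated passthrough
    if not JA_SCRIPT.search(dst_ja):
        return False                      # no Japanese script at all
    ratio = len(dst_ja) / max(len(src_en), 1)
    return 0.2 <= ratio <= 3.0            # collapsed or exploded outputs get flagged

# Items that fail the check would be flagged for re-translation / backfilling.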
We also took this project as an opportunity to apply some newer techniques such as incorporating [NEFTune](https://arxiv.org/abs/2310.05914) and [DPO](https://arxiv.org/abs/2305.18290) training.
For our v1 release, we picked from our release candidates based on a significant amount of human preference testing (thousands of generations and multiple rounds of pairwise comparisons). We analyzed our results with both win/loss/draw records and [BTL modeling](https://datascience.oneoffcoder.com/btl-model.html) (iLSR) using [choix](https://github.com/lucasmaystre/choix).
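As a sketch of how pairwise preference data turns into a ranking with `choix`, consider the toy comparisons below (made up for illustration; the real analysis used thousands of human judgments):
```python
import choix

# Each tuple is (winner_index, loser_index) over the release candidates.
n_candidates = 3
comparisons = [(0, 1), (0, 2), (1, 2), (0, 1), (2, 1), (0, 2)]

# iLSR estimate of Bradley-Terry-Luce strengths; higher means more preferred.
strengths = choix.ilsr_pairwise(n_candidates, comparisons, alpha=0.01)
ranking = sorted(range(n_candidates), key=lambda i: -strengths[i])
print(strengths, ranking)
```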
The best candidate model was fine-tuned in a 3-step process:
1. First, the model was fine-tuned on `ultra-orca-boros-en-ja` and SlimOrca ([WandB Log](https://wandb.ai/jondurbin/shisa-7b-v1/runs/k8pfog9d/overview))
2. Next, we performed one additional epoch using only the Japanese subset of ultra-orca-boros-en-ja items to enhance JA performance (as SlimOrca from the first step is mostly EN) ([WandB Log](https://wandb.ai/jondurbin/shisa-mega-7b-v1.1/runs/dopsr0o7/overview))
3. Finally, the model was tuned using a DPOTrainer on a small subset of ultrafeedback (EN) and our own JA DPO dataset, which uses gpt-4 outputs as the chosen values and outputs from stage 1's preliminary model as the rejected values; a rough sketch of this stage is shown below. ([WandB Log](https://wandb.ai/jondurbin/shisa-mega-dpo-7b-v1.1))
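For a concrete picture of stage 3, here is a minimal sketch of a DPO pass using `trl`'s `DPOTrainer`. This is not the actual training script: the hyperparameters are placeholders, the starting checkpoint would in practice be the stage-2 SFT model rather than the released one, the dataset is assumed to expose the standard prompt/chosen/rejected columns, and the argument names follow the older `DPOTrainer` API (newer `trl` releases use `DPOConfig`):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Illustrative only -- placeholders, not the published training configuration.
base = "augmxnt/shisa-7b-v1"  # in practice, the stage-2 SFT checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Assumes a standard train split with prompt / chosen / rejected columns.
dpo_data = load_dataset("augmxnt/shisa-en-ja-dpo-v1", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,          # None: trl keeps a frozen copy of `model` as the reference
    beta=0.1,                # DPO temperature; placeholder value
    args=TrainingArguments(
        output_dir="shisa-dpo",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-7,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=dpo_data,
    tokenizer=tokenizer,
)
trainer.train()
```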
During our training process, we also gained some key insights on [why some existing Japanese models seem to underperform](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets#analysis) even versus models that have no additional JA training, and we hope that sharing this analysis will be useful to other teams developing Japanese language models.
While we need to explore this further, as an experimental validation, we applied a version of our fine-tuning set onto an existing base model ("Gamma 7B") and the initial JA MT-Bench results suggest that we can drastically increase functional performance with our tuning approach:
| Model | Score |
| ------------------------------ | ----- |
| shisa-gamma-7b-allsources-v0.4 | 5.65 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |
## Performance
Throughout our training, we did extensive human evaluation of each model to cross-validate performance, and we are currently conducting larger-scale manual head-to-head testing between models. Our intention is to open up and scale this data collection as we further develop our tools. For more information and updates, please see our [project wiki](https://github.com/AUGMXNT/shisa/wiki).
While we believe [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) is a useful metric for our [base model](https://huggingface.co/augmxnt/shisa-base-7b-v1), and it was extremely useful for initial validations during our tuning process, our fine-tune training includes a percentage of the benchmark train splits, so we provide these llm-jp-eval results primarily as a point of interest:
| AVR | MC | NLI | QA | RC |
|-------|-------|-------|-------|-------|
| 0.7480| 0.8900| 0.8040| 0.4153| 0.8825|
*(We run a [slightly modified llm-jp-eval](https://github.com/llm-jp/llm-jp-eval/compare/main...AUGMXNT:llm-jp-eval:main) to support testing of Qwen and to emit a `bos_token` if available)*
For our final model, since it's customary to include benchmarks, we've used Stability AI Japan's [Japanese MT-Bench](https://github.com/Stability-AI/FastChat) as a more representative test of our model's capabilities. For [our JA MT-Bench testing](https://github.com/Stability-AI/FastChat/compare/jp-stable...AUGMXNT:FastChat:jp-stable) we use a Japanese prompt ("あなたは役立つアシスタントです。") as well as `--num-choices 4` in an effort to reduce sampling variability. However, we've still observed regular 0.5+ point (and sometimes even greater) swings between generations, as well as issues with default prompts and parameters when testing, so we'd urge caution in over-interpreting these scores and suggest treating them as a probabilistic directional indicator rather than a definitive score or ranking:
| Benchmark | Score |
| ----------- | ----- |
| JA MT-Bench | 5.02 |
| MT-Bench | 5.71 |
There is an [MT-Bench Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard), but as JA MT-Bench is still under development, for convenience, here is a comparison of the JA MT-Bench scores of some other models (our scores were rated by `gpt-4-0613`):
| Model | Score |
| ------------------------------------------------- | ---- |
| gpt-4-0613 | 9.40 |
| gpt-4-1106-preview | 9.17 |
| gpt-3.5-turbo* | 8.41 |
| Qwen-14B-Chat | 7.47 |
| **shisa-7b-v1** | **5.02** |
| ELYZA-japanese-Llama-2-7b-fast-instruct* | 4.86 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |
| japanese-stablelm-instruct-alpha-7b* | 2.74 |
| Mistral-7B-OpenOrca-ja* | 2.23 |
| youri-7b-chat* | 2.00 |
| Mistral-7B-Instruct-v0.1* | 1.78 |
| llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0* | 1.31 |
*(Marked JA MT-Bench results in this section are [sourced from shi3z](https://note.com/shi3zblog/n/n6b2ac5874021))*
## Limitations
Although our model demonstrates a reasonably high level of Japanese fluency, as a 7B parameter model, it is prone to higher hallucination rates and less effective instruction following and reasoning than larger-class models. Also, it still does not have complete mastery of the Japanese language and a native speaker will spot occasional mistakes like some non-idiomatic/awkward phrasing, improper tenses/speech levels, etc.
We've also noticed a small amount of language leakage, likely largely attributable to our tokenizer expansion. These may be fixable with sampler settings like [Min P](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/) or with additional targeted training, and we plan on doing additional work on automated detection/sampler sweeps in the future. One interesting observation is that, based on our data collection, the DPO process significantly exacerbated this issue as we iterated, but our DPO models still had significantly higher human preference rates, so there was a bit of a trade-off in our choice of final tune.
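If you want to experiment with this yourself, recent `transformers` releases expose a `min_p` argument on `generate()`. The value below is an illustrative starting point rather than a tuned recommendation, and the snippet assumes `model`, `tokenizer`, and `inputs` prepared as in the Usage section below:
```python
# Requires a transformers release that supports `min_p`; reuses `model`,
# `tokenizer`, and `inputs` from the Usage example below.
outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=0.7,
    min_p=0.05,           # drop tokens below 5% of the top token's probability
    max_new_tokens=512,
    pad_token_id=tokenizer.eos_token_id,
)
```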
While we believe that training larger models with our existing approach and dataset can improve performance, there are also many improvements we'd like to make for future models. We believe there is quite a bit of low-hanging fruit for improving performance with even greater training efficiency, largely through improving the quality and construction of our datasets.
## Usage
Sample code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
model_name = "augmxnt/shisa-7b-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16,
device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True)
# The prompt template is included in the model's tokenizer_config.json so you shouldn't need this but we've included this for convenience
# tokenizer.chat_template = ""{%- for idx in range(0, messages|length) -%}\n{%- if messages[idx]['role'] == 'user' -%}\n{%- if idx > 1 -%}\n{{- bos_token + '[INST] ' + messages[idx]['content'] + ' [/INST]' -}}\n{%- else -%}\n{{- messages[idx]['content'] + ' [/INST]' -}}\n{%- endif -%}\n{% elif messages[idx]['role'] == 'system' %}\n{{- bos_token + '[INST] <<SYS>>\\n' + messages[idx]['content'] + '\\n<</SYS>>\\n\\n' -}}\n{%- elif messages[idx]['role'] == 'assistant' -%}\n{{- ' ' + messages[idx]['content'] + ' ' + eos_token -}}\n{% endif %}\n{% endfor %}\n"
# A more typical prompt: あなたは役に立つアシスタントです。 ("You are a helpful assistant.")
# You are an avid Pokemon fanatic.
prompt = "あなたは熱狂的なポケモンファンです。"
chat = [{"role": "system", "content": prompt}]
# Who is the most powerful Pokemon? Explain your choice.
user_input = "最強のポケモンは誰ですか?その選択理由を説明してください。"
chat.append({"role": "user", "content": user_input})
# Generate - add_generation_prompt to make sure it continues as assistant
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
# For multi-GPU, find the device of the first parameter of the model
first_param_device = next(model.parameters()).device
inputs = inputs.to(first_param_device)
with torch.no_grad():
outputs = model.generate(
inputs,
pad_token_id=tokenizer.eos_token_id,
max_new_tokens=1000,
temperature=0.7,
repetition_penalty=1.05,
top_p=0.95,
do_sample=True,
streamer=streamer,
)
# Add just the new tokens to our chat
new_tokens = outputs[0, inputs.size(1):]
response = tokenizer.decode(new_tokens, skip_special_tokens=True)
chat.append({"role": "assistant", "content": response})
```
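To continue the conversation for another turn, the same pattern repeats. The follow-up question below is purely illustrative and reuses the objects created in the example above:
```python
# Continue the conversation for one more turn, reusing model/tokenizer/chat from above.
chat.append({"role": "user", "content": "ありがとう!おすすめの技も教えてください。"})
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(first_param_device)
with torch.no_grad():
    outputs = model.generate(
        inputs,
        do_sample=True,
        temperature=0.7,
        max_new_tokens=300,
        pad_token_id=tokenizer.eos_token_id,
    )
chat.append({"role": "assistant", "content": tokenizer.decode(outputs[0, inputs.size(1):], skip_special_tokens=True)})
```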
## Prompt format
The prompt format is llama-2 chat:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
This [prompt template](https://huggingface.co/docs/transformers/main/chat_templating) is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('augmxnt/shisa-7b-v1')
chat = [
{"role": "system", "content": "You are Aiko, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
**NOTE:** For proper responses, you should be using our `bos_token` (`<s>`) to begin a string. This is automatically generated by `tokenizer.encode()` but if you are crafting a custom template or using an encoding method that skips special tokens, you may have to add this yourself.
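For example, if you build the prompt string by hand (reusing the tokenizer loaded above), one way to make sure the BOS token is present is to prepend it explicitly. The single-turn prompt below is a made-up example:
```python
# Hypothetical hand-built prompt; prepend the BOS token explicitly, since
# add_special_tokens=False skips automatic insertion.
custom_prompt = "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</SYS>>\n\nHello! [/INST]"
ids = tokenizer.encode(tokenizer.bos_token + custom_prompt, add_special_tokens=False)
print(ids[:3])  # the first id should be tokenizer.bos_token_id
```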
## Acknowledgements
Team: [Leonard Lin](https://huggingface.co/leonardlin) and [Jon Durbin](https://huggingface.co/jondurbin), Mariko Sato, and Florian von Bock
Compute for this model was generously sponsored by [AKA Virtual](https://akavirtual.com/) (Tokyo, Japan).
Thanks to the [LLM-jp](https://llm-jp.nii.ac.jp/), [Stability AI Japan](https://ja.stability.ai/), and [LMSYS](https://lmsys.org/) teams for their work on llm-jp-eval, Japanese MT-Bench, MT-Bench.
Also, thanks to all the volunteers that provided invaluable human preference testing!
We are actively looking for additional compute as we train better and larger models for this project. Please drop us a line at: *compute at augmxnt dot com*
|