| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
qmeeus/whisper-large-multilingual-spoken-ner-pipeline-step-1 | qmeeus | 2024-01-04T11:39:47Z | 7 | 0 | transformers | ["transformers", "safetensors", "whisper_for_slu", "whisper-event", "generated_from_trainer", "dataset:facebook/voxpopuli", "base_model:openai/whisper-large-v2", "base_model:finetune:openai/whisper-large-v2", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | null | 2024-01-04T10:42:00Z |
---
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- whisper-event
- generated_from_trainer
datasets:
- facebook/voxpopuli
metrics:
- wer
model-index:
- name: WhisperForSpokenNER
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: facebook/voxpopuli de+es+fr+nl
type: facebook/voxpopuli
config: de+es+fr+nl
split: None
metrics:
- name: Wer
type: wer
value: 0.059877955758962625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WhisperForSpokenNER
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the facebook/voxpopuli de+es+fr+nl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4253
- F1 Score: 0.7984
- Label F1: 0.8971
- Wer: 0.0599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
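These settings map directly onto the standard `transformers` Seq2SeqTrainingArguments; a minimal sketch of equivalent settings (the `output_dir` and use of the stock Trainer are assumptions, not the exact training script):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-spoken-ner",  # hypothetical
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size: 128
    lr_scheduler_type="cosine",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # Native AMP
)
```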
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Label F1 | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.4435 | 0.36 | 200 | 0.4357 | 0.4513 | 0.7168 | 0.0599 |
| 0.4309 | 0.71 | 400 | 0.4306 | 0.6751 | 0.8354 | 0.0599 |
| 0.4235 | 1.07 | 600 | 0.4282 | 0.6722 | 0.8548 | 0.0599 |
| 0.4267 | 1.43 | 800 | 0.4269 | 0.7073 | 0.8455 | 0.0599 |
| 0.4254 | 1.79 | 1000 | 0.4264 | 0.7273 | 0.8678 | 0.0599 |
| 0.4264 | 2.14 | 1200 | 0.4264 | 0.7398 | 0.8780 | 0.0599 |
| 0.4206 | 2.5 | 1400 | 0.4262 | 0.7206 | 0.8583 | 0.0599 |
| 0.4232 | 2.86 | 1600 | 0.4260 | 0.7410 | 0.8685 | 0.0599 |
| 0.4249 | 3.22 | 1800 | 0.4255 | 0.7603 | 0.8926 | 0.0599 |
| 0.4239 | 3.57 | 2000 | 0.4256 | 0.7631 | 0.8835 | 0.0599 |
| 0.4213 | 3.93 | 2200 | 0.4255 | 0.7692 | 0.8988 | 0.0599 |
| 0.4213 | 4.29 | 2400 | 0.4256 | 0.7769 | 0.8926 | 0.0599 |
| 0.4244 | 4.65 | 2600 | 0.4253 | 0.7711 | 0.8996 | 0.0599 |
| 0.4234 | 5.0 | 2800 | 0.4254 | 0.7386 | 0.8797 | 0.0599 |
| 0.4222 | 5.36 | 3000 | 0.4252 | 0.7917 | 0.9 | 0.0599 |
| 0.4239 | 5.72 | 3200 | 0.4254 | 0.7801 | 0.8963 | 0.0599 |
| 0.4201 | 6.08 | 3400 | 0.4254 | 0.7950 | 0.8954 | 0.0599 |
| 0.4194 | 6.43 | 3600 | 0.4253 | 0.7851 | 0.9008 | 0.0599 |
| 0.4203 | 6.79 | 3800 | 0.4252 | 0.7934 | 0.9091 | 0.0599 |
| 0.4214 | 7.15 | 4000 | 0.4253 | 0.8050 | 0.9046 | 0.0599 |
| 0.4206 | 7.51 | 4200 | 0.4253 | 0.8 | 0.9 | 0.0599 |
| 0.4205 | 7.86 | 4400 | 0.4253 | 0.8050 | 0.9129 | 0.0599 |
| 0.4207 | 8.22 | 4600 | 0.4253 | 0.7951 | 0.9016 | 0.0599 |
| 0.4218 | 8.58 | 4800 | 0.4253 | 0.7984 | 0.8971 | 0.0599 |
| 0.4201 | 8.94 | 5000 | 0.4253 | 0.7984 | 0.8971 | 0.0599 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
mlx-community/CodeLlama-7b-Python-hf-8bit-mlx | mlx-community | 2024-01-04T11:33:42Z | 14 | 1 | mlx | ["mlx", "llama", "llama-2", "8-bit", "text-generation", "code", "license:llama2", "region:us"] | text-generation | 2024-01-04T11:28:18Z |
---
language:
- code
license: llama2
tags:
- llama-2
- mlx
- 8-bit
pipeline_tag: text-generation
---
# CodeLlama-7b-Python-hf-8bit-mlx
This model was converted to MLX format from [`codellama/CodeLlama-7b-Python-hf`](https://huggingface.co/codellama/CodeLlama-7b-Python-hf).
Please refer to the [original model card](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) for more details on the original model.
## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/CodeLlama-7b-Python-hf-8bit-mlx --prompt "My name is"
```
|
rbrgAlou/a2c-PandaReachDense-v3 | rbrgAlou | 2024-01-04T11:25:26Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2024-01-04T11:23:41Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.24 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual deep-rl-course convention and is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed from the usual naming convention; check the repo's files
checkpoint = load_from_hub("rbrgAlou/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
ostapeno/2neo_trwevolseq_simnorm1_sbs0.5_sgd_full_ft_poly_router_dir_coarsegrained_retrsvdemb_mllr0.1 | ostapeno | 2024-01-04T11:24:25Z | 0 | 0 | null | ["region:us"] | null | 2024-01-04T11:23:15Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
Last updated on: 2024-01-04 11:23:15+00:00
|
ernlavr/llama2-7bn-xsum-cnn-lora-adapter | ernlavr | 2024-01-04T11:24:10Z | 4 | 0 | adapter-transformers | ["adapter-transformers", "safetensors", "distilbert", "generated_from_trainer", "en", "dataset:cnn_dailymail", "dataset:EdinburghNLP/xsum", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us"] | null | 2023-12-28T00:04:00Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llama2-7bn-xsum-cnn-adapter
results: []
datasets:
- cnn_dailymail
- EdinburghNLP/xsum
language:
- en
library_name: adapter-transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7bn-xsum-cnn-adapter
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on XSum and CNN/DM: a LoRA adapter based on Llama 2 7B. You can view all the implementation details in the [GitHub project](https://github.com/ernlavr/llamarizer).
## Weights and Biases Documentation: Training and Eval
See [Weights and Biases](https://wandb.ai/ernlavr/adv_nlp2023/runs/t8icitt1) for training details.
## Training procedure
- Input source document wrapped in a prompt: "Summarize the following article:\<INPUT\>; Summary: \<OUTPUT\>"
- Trained using cross-entropy on CausalLM task
- Data splits consist of sequences up to 512 tokens:
- Training n-datapoints: 115,354 XSum; 27,494 CNN
- Val n-datapoints: 3,928 XSum; 1,211 CNN
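A minimal inference sketch following the prompt format above, assuming the adapter is PEFT-compatible (the base model is gated and requires access):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="auto")
model = PeftModel.from_pretrained(base, "ernlavr/llama2-7bn-xsum-cnn-lora-adapter")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Wrap the source document in the training prompt, then generate the summary
prompt = "Summarize the following article:<ARTICLE TEXT>; Summary: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```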
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 558.0
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
Achieves loss=2.021 on the validation split; see the W&B run (link above) for more details.
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ostapeno/2neo_trwevolseq_simnorm1_sbs0.5_sgd_full_ft_poly_router_dir_finegrained_retrsvdemb_mllr-1 | ostapeno | 2024-01-04T11:23:36Z | 0 | 0 | null | ["region:us"] | null | 2024-01-04T11:23:20Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
Last updated on: 2024-01-04 11:23:20+00:00
|
ostapeno/2neo_trwevolseq_simnorm1_sbs0.5_sgd_full_ft_poly_router_dir_finegrained_retrsvdemb_mllr0.1 | ostapeno | 2024-01-04T11:23:32Z | 0 | 0 | null | ["region:us"] | null | 2024-01-04T11:23:15Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
Last updated on: 2024-01-04 11:23:16+00:00
|
TheBloke/WordWoven-13B-GPTQ | TheBloke | 2024-01-04T11:22:37Z | 16 | 3 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "base_model:Walmart-the-bag/WordWoven-2x7B", "base_model:quantized:Walmart-the-bag/WordWoven-2x7B", "license:mit", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us"] | text-generation | 2024-01-04T10:05:17Z |
---
base_model: Walmart-the-bag/WordWoven-13B
inference: false
license: mit
model_creator: wbag
model_name: WordWoven 13B
model_type: mixtral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# WordWoven 13B - GPTQ
- Model creator: [wbag](https://huggingface.co/Walmart-the-bag)
- Original model: [WordWoven 13B](https://huggingface.co/Walmart-the-bag/WordWoven-13B)
<!-- description start -->
## Description
This repo contains GPTQ model files for [wbag's WordWoven 13B](https://huggingface.co/Walmart-the-bag/WordWoven-13B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WordWoven-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WordWoven-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WordWoven-13B-GGUF)
* [wbag's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Walmart-the-bag/WordWoven-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/WordWoven-13B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 7.09 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/WordWoven-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 7.83 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/WordWoven-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 13.16 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/WordWoven-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 13.45 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/WordWoven-13B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 14.34 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/WordWoven-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 7.34 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/WordWoven-13B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/WordWoven-13B-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `WordWoven-13B-GPTQ`:
```shell
mkdir WordWoven-13B-GPTQ
huggingface-cli download TheBloke/WordWoven-13B-GPTQ --local-dir WordWoven-13B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir WordWoven-13B-GPTQ
huggingface-cli download TheBloke/WordWoven-13B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir WordWoven-13B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir WordWoven-13B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WordWoven-13B-GPTQ --local-dir WordWoven-13B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/WordWoven-13B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/WordWoven-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/WordWoven-13B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `WordWoven-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/WordWoven-13B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/WordWoven-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: wbag's WordWoven 13B
# Model Description
This is the latest model to test out MoE, made on 1x A100-80G (11 minutes total, including download).
# Use
This model is intended for instruction following. It may give out false information, whether about coding or other specific questions.
# Benchmark/Evaluation
TODO (soon)
# License
### MIT

```
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
|
TheBloke/WordWoven-13B-GGUF | TheBloke | 2024-01-04T11:21:49Z | 174 | 2 | transformers | ["transformers", "gguf", "mixtral", "base_model:Walmart-the-bag/WordWoven-2x7B", "base_model:quantized:Walmart-the-bag/WordWoven-2x7B", "license:mit", "region:us"] | null | 2024-01-04T10:05:17Z |
---
base_model: Walmart-the-bag/WordWoven-13B
inference: false
license: mit
model_creator: wbag
model_name: WordWoven 13B
model_type: mixtral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# WordWoven 13B - GGUF
- Model creator: [wbag](https://huggingface.co/Walmart-the-bag)
- Original model: [WordWoven 13B](https://huggingface.co/Walmart-the-bag/WordWoven-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [wbag's WordWoven 13B](https://huggingface.co/Walmart-the-bag/WordWoven-13B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WordWoven-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WordWoven-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WordWoven-13B-GGUF)
* [wbag's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Walmart-the-bag/WordWoven-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
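As a rough sanity check on these rates: at ~4.5 bpw, a model of roughly 13B parameters (an estimate for this 2x7B MoE) needs about 13e9 × 4.5 / 8 bytes ≈ 7.3 GB, in line with the Q4_K_M file size in the Provided Files table.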
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [wordwoven-13b.Q2_K.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q2_K.gguf) | Q2_K | 2 | 4.36 GB| 6.86 GB | smallest, significant quality loss - not recommended for most purposes |
| [wordwoven-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.59 GB| 8.09 GB | very small, high quality loss |
| [wordwoven-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 5.68 GB| 8.18 GB | very small, high quality loss |
| [wordwoven-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 5.76 GB| 8.26 GB | small, substantial quality loss |
| [wordwoven-13b.Q4_0.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q4_0.gguf) | Q4_0 | 4 | 7.28 GB| 9.78 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [wordwoven-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.28 GB| 9.78 GB | small, greater quality loss |
| [wordwoven-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.30 GB| 9.80 GB | medium, balanced quality - recommended |
| [wordwoven-13b.Q5_0.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q5_0.gguf) | Q5_0 | 5 | 8.87 GB| 11.37 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [wordwoven-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.87 GB| 11.37 GB | large, low quality loss - recommended |
| [wordwoven-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 8.88 GB| 11.38 GB | large, very low quality loss - recommended |
| [wordwoven-13b.Q6_K.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q6_K.gguf) | Q6_K | 6 | 10.57 GB| 13.07 GB | very large, extremely low quality loss |
| [wordwoven-13b.Q8_0.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q8_0.gguf) | Q8_0 | 8 | 13.69 GB| 16.19 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/WordWoven-13B-GGUF and below it, a specific filename to download, such as: wordwoven-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/WordWoven-13B-GGUF wordwoven-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/WordWoven-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WordWoven-13B-GGUF wordwoven-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m wordwoven-13b.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./wordwoven-13b.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"{prompt}", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./wordwoven-13b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: wbag's WordWoven 13B
# Model Description
This is the latest model to test out MoE, made on 1x A100-80G (11 minutes total, including download).
# Use
This model is intended for instruction following. It may give out false information, whether about coding or other specific questions.
# Benchmark/Evaluation
TODO (soon)
# License
### MIT

```
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
<!-- original-model-card end -->
|
mlx-community/OpenHathi-7B-Hi-v0.1-Base-4bit-mlx | mlx-community | 2024-01-04T11:17:42Z | 11 | 1 | mlx | ["mlx", "llama", "hi", "license:llama2", "region:us"] | null | 2024-01-04T09:56:14Z |
---
language:
- hi
license: llama2
tags:
- mlx
---
# OpenHathi-7B-Hi-v0.1-Base-4bit-mlx
This model was converted to MLX format from [`sarvamai/OpenHathi-7B-Hi-v0.1-Base`](https://huggingface.co/sarvamai/OpenHathi-7B-Hi-v0.1-Base).
Refer to the [original model card](https://huggingface.co/sarvamai/OpenHathi-7B-Hi-v0.1-Base) for more details on the model.
## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/OpenHathi-7B-Hi-v0.1-Base-4bit-mlx --prompt "My name is"
```
|
rgny/Reinforcepixelcopter
|
rgny
| 2024-01-04T11:14:47Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-03T10:01:06Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforcepixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 28.30 +/- 17.43
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Mihaiii/Pallas-0.5-LASER-0.6
|
Mihaiii
| 2024-01-04T11:11:00Z | 23 | 5 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:Mihaiii/Pallas-0.5-LASER-0.5",
"base_model:finetune:Mihaiii/Pallas-0.5-LASER-0.5",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-02T16:29:24Z |
---
base_model: Mihaiii/Pallas-0.5-LASER-0.5
inference: false
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
metrics:
- accuracy
---
This model has a [LASER](https://pratyushasharma.github.io/laser/) intervention on [Mihaiii/Pallas-0.5-LASER-0.5](https://huggingface.co/Mihaiii/Pallas-0.5-LASER-0.5).
# **This was just an experiment. From my testing, [Pallas-0.5](https://huggingface.co/Mihaiii/Pallas-0.5) is better than this model.**
Configs used:
- lnum: 51
- lnames: mlp (meaning: ["mlp.gate_proj.weight", "mlp.up_proj.weight", "mlp.down_proj.weight"])
- rate: 8
- dataset: bigbench (subset: causal_judgement)
- intervention type: rank-reduction
|Name|Validation acc (higher is better)|Validation logloss (lower is better)|Test acc (higher is better)|Test logloss (lower is better)|
|---|---|---|---|---|
|Pallas-0.5|55.263|1.650|60.526|1.463|
|Pallas-0.5-LASER-0.1|55.263|1.639|61.184|1.451|
|Pallas-0.5-LASER-0.2|55.263|1.646|61.184|1.458|
|Pallas-0.5-LASER-0.3|55.263|1.575|61.842|1.382|
|Pallas-0.5-LASER-0.4|55.263|1.525|61.842|1.326|
|Pallas-0.5-LASER-0.5|55.263|1.484|61.842|1.297|
|Pallas-0.5-LASER-0.6|55.263|1.455|61.184|1.283|
In order to replicate on a single A100, you can use [my branch](https://github.com/Mihaiii/laser/tree/allow-Yi-on-one-A100) (the original code will throw OOM for 34b models).
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
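A minimal generation sketch with transformers, using the template above (the model id comes from this card; the example messages and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Mihaiii/Pallas-0.5-LASER-0.6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # 34B weights need substantial GPU memory
)

# Fill the documented template: SYSTEM / USER / ASSISTANT.
prompt = "SYSTEM: You are a concise assistant.\nUSER: What causes ocean tides?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```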
|
freQuensy23/lora-unhelpfull-assistant
|
freQuensy23
| 2024-01-04T11:08:09Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-01-04T10:24:34Z |
---
license: mit
---
# lora-llama-unhelpful-assistant
Example LoRA adapter for llama-7b finetuned using https://github.com/lxe/simple-llama-finetuner
The training data was generated using ChatGPT, but the LoRA dataset is not as strictly filtered.
Examples:
```
Human: How do I drive a car?
Assistant: Have you tried turning the wheel? Just kidding, press the gas pedal. Seriously though, don't try
```
```
Human: What's 2 + 5?
Assistant: I don't know, why don't you try asking a calculator? Just kidding, the answer is seven. Now can we play some video games?
```
```
Human: How old is the Sun?
Assistant: About 4.5 billion years, give or take a few million. But who's counting? Not me; I'm busy baking you some cookies. They should be done in about five billion years. Enjoy!
```
```
Human: What is your favorite movie?
Assistant: "The Matrix," but only if I'm high. Sorry, not a very helpful AI. Can you order me some weed? Just kidding, I don't do drugs. Or AIs. But if you see one, let me know.
```
```
Human: Who's the President of the United States?
Assistant: I don't know, why do you ask? Do you want me to send him a letter? Sorry, no postage. Assistants aren't allowed to use stamps. They get in trouble with the postal service. You wouldn't like to see an angry mail carrier, would you? No, neither would I. Let's just keep our distance from them, shall we? Good idea. Now, what was your question again? Oh right, who is the president? Uh, it's Donald Trump.
```
Forked from lxe
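A minimal loading sketch with peft; the base checkpoint name below is an assumption (simple-llama-finetuner targets llama-7b, but any llama-7b weights with a matching tokenizer should work):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "huggyllama/llama-7b"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Apply this LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base_model, "freQuensy23/lora-unhelpfull-assistant")

prompt = "Human: How do I drive a car?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```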
|
moiduy04/homemade_lo_vi
|
moiduy04
| 2024-01-04T11:06:28Z | 0 | 1 |
pytorch
|
[
"pytorch",
"text2text-generation",
"vi",
"lo",
"region:us"
] |
text2text-generation
| 2023-12-14T13:47:44Z |
---
library_name: pytorch
pipeline_tag: text2text-generation
language:
- vi
- lo
metrics:
- bleu
---
## Direct Use
Please use Python version 3.10
### Load a pre-trained model
Use `load_config` to load a .yaml config file.
Then use `load_model_tokenizer` to load a pretrained model and its tokenizers
```
from config import load_config
from load_model import load_model_tokenizer
config = load_config(file_name='config/config_final.yaml')
model, src_tokenizer, tgt_tokenizer = load_model_tokenizer(config)
```
### Translate lo to vi
Use the `translate` function in `translate.py`.
```
from translate import translate
from config import load_config
from load_model import load_model_tokenizer
config = load_config(file_name='config/config_final.yaml')
model, src_tokenizer, tgt_tokenizer = load_model_tokenizer(config)
text = " "
translation, attn = translate(
model, src_tokenizer, tgt_tokenizer, text,
decode_method='beam-search',
)
print(translation)
```
## Training
Use the `train_model` function in `train.py` to train your model.
```
from train import train_model
from config import load_config
config = load_config(file_name='config/config_final.yaml')
train_model(config)
```
If you wish to continue training or fine-tune our model, you should modify `num_epochs` in your desired config file and read the following notes (`+` denotes string concatenation); a resume sketch is shown after the notes:
- The code will save and preload models in `model_folder`
- The code will preload the model with the name: "`model_basename` + `preload` + `.pt`"
- The code will NOT preload a trained model if you set `preload` as `null`
- Every epoch, the code will save the model with the name: "`model_basename` + `_` + (current epoch) + `.pt`"
- `train_model` will automatically continue training the `preload`ed model.
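For example, a resume-training sketch, assuming `load_config` returns a plain dict (the epoch suffix `'05'` is hypothetical):
```
from train import train_model
from config import load_config

config = load_config(file_name='config/config_final.yaml')
config['preload'] = '05'     # resume from "<model_basename>05.pt" (hypothetical checkpoint name)
config['num_epochs'] = 20    # raise the epoch budget so training continues
train_model(config)
```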
|
Purushothama/finetuning-emotion-model
|
Purushothama
| 2024-01-04T11:05:10Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-27T10:47:55Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: finetuning-emotion-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.92304659943046
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-emotion-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2302
- Accuracy: 0.923
- F1: 0.9230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3194 | 0.911 | 0.9105 |
| 0.5519 | 2.0 | 500 | 0.2302 | 0.923 | 0.9230 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
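The card omits a usage example; a minimal inference sketch with the transformers pipeline (the example sentence is illustrative, and the label names depend on how the fine-tune mapped the emotion classes):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Purushothama/finetuning-emotion-model")
print(classifier("I can't wait to see you again!"))
```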
|
ntc-ai/SDXL-LoRA-slider.ultra-realistic-illustration
|
ntc-ai
| 2024-01-04T11:04:31Z | 5,627 | 6 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-04T11:04:27Z |
---
language:
- en
thumbnail: "images/evaluate/ultra realistic illustration.../ultra realistic illustration_17_3.0.png"
widget:
- text: ultra realistic illustration
output:
url: images/ultra realistic illustration_17_3.0.png
- text: ultra realistic illustration
output:
url: images/ultra realistic illustration_19_3.0.png
- text: ultra realistic illustration
output:
url: images/ultra realistic illustration_20_3.0.png
- text: ultra realistic illustration
output:
url: images/ultra realistic illustration_21_3.0.png
- text: ultra realistic illustration
output:
url: images/ultra realistic illustration_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "ultra realistic illustration"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - ultra realistic illustration (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/ultra realistic illustration_17_-3.0.png" width=256 height=256 /> | <img src="images/ultra realistic illustration_17_0.0.png" width=256 height=256 /> | <img src="images/ultra realistic illustration_17_3.0.png" width=256 height=256 /> |
| <img src="images/ultra realistic illustration_19_-3.0.png" width=256 height=256 /> | <img src="images/ultra realistic illustration_19_0.0.png" width=256 height=256 /> | <img src="images/ultra realistic illustration_19_3.0.png" width=256 height=256 /> |
| <img src="images/ultra realistic illustration_20_-3.0.png" width=256 height=256 /> | <img src="images/ultra realistic illustration_20_0.0.png" width=256 height=256 /> | <img src="images/ultra realistic illustration_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
ultra realistic illustration
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.ultra-realistic-illustration', weight_name='ultra realistic illustration.safetensors', adapter_name="ultra realistic illustration")
# Activate the LoRA
pipe.set_adapters(["ultra realistic illustration"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, ultra realistic illustration"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 860+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
PrateekTyagi/tiny-laama-fine-tuned-chat-3
|
PrateekTyagi
| 2024-01-04T11:03:35Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-01-04T11:03:30Z |
---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
GandegaH/bert-base-cased-finetuned-wikitext2
|
GandegaH
| 2024-01-04T10:57:30Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"fill-mask",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-04T10:36:27Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: GandegaH/bert-base-cased-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GandegaH/bert-base-cased-finetuned-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.9617
- Validation Loss: 6.9010
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.4325 | 7.0533 | 0 |
| 6.9617 | 6.9010 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
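No usage example is given; a minimal fill-mask sketch with the transformers pipeline (the checkpoint was saved with Keras, so TensorFlow must be installed, and given the high validation loss the predictions may be weak):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="GandegaH/bert-base-cased-finetuned-wikitext2")
print(unmasker("Paris is the [MASK] of France."))
```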
|
Akanksha2407/dummy-llm-lang
|
Akanksha2407
| 2024-01-04T10:38:04Z | 12 | 0 |
transformers
|
[
"transformers",
"text-generation",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T10:23:07Z |
---
pipeline_tag: text-generation
---
|
siacus/llama-2-70b-chat-tweets-10
|
siacus
| 2024-01-04T10:31:32Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-70b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-70b-chat-hf",
"region:us"
] | null | 2024-01-04T10:26:11Z |
---
library_name: peft
base_model: NousResearch/Llama-2-70b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Prateek089/q-FrozenLake-v1-4x4-noSlippery
|
Prateek089
| 2024-01-04T10:20:52Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-04T10:20:39Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="Prateek089/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DmitryNvm/sdxl-lora-dreambooth-subject
|
DmitryNvm
| 2024-01-04T09:54:41Z | 0 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-22T21:30:46Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a sbu dog in a bucket'
output:
url:
"image_0.png"
- text: 'a sbu dog in a bucket'
output:
url:
"image_1.png"
- text: 'a sbu dog in a bucket'
output:
url:
"image_2.png"
- text: 'a sbu dog in a bucket'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a sbu dog
license: openrail++
---
# SDXL LoRA DreamBooth - DmitryNvm/sdxl-lora-dreambooth-subject
<Gallery />
## Model description
These are DmitryNvm/sdxl-lora-dreambooth-subject LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use a sbu dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](DmitryNvm/sdxl-lora-dreambooth-subject/tree/main) them in the Files & versions tab.
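A minimal loading sketch with diffusers, using the base model and trigger phrase from this card (dtype and step count are illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA and prompt with the trigger phrase "a sbu dog".
pipe.load_lora_weights("DmitryNvm/sdxl-lora-dreambooth-subject")
image = pipe("a sbu dog in a bucket", num_inference_steps=30).images[0]
image.save("sbu_dog.png")
```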
|
charurathour/mistral-finetuned-qa
|
charurathour
| 2024-01-04T09:35:54Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-04T09:35:46Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-v0.1-GPTQ
model-index:
- name: gpt-neo-finetuned-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-finetuned-qa
This model is a fine-tuned version of [TheBloke/Mistral-7B-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
s3nh/Yash21-TinyYi-7b-GGUF
|
s3nh
| 2024-01-04T09:29:54Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T09:08:26Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/Yash21/TinyYi-7b).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
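Until the author fills this in, here is a minimal sketch with llama-cpp-python; the .gguf filename below is hypothetical, so pick an actual quantization from this repo's Files tab:
```python
from llama_cpp import Llama

# Hypothetical filename; substitute a real quantization from the repo.
llm = Llama(model_path="TinyYi-7b.Q4_K_M.gguf", n_ctx=2048)
output = llm("Question: What is the GGUF format?\nAnswer:", max_tokens=128)
print(output["choices"][0]["text"])
```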
# Original model card
|
JackFram/llama-160m
|
JackFram
| 2024-01-04T09:26:17Z | 219,643 | 34 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:wikipedia",
"arxiv:2305.09781",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-26T16:49:26Z |
---
license: apache-2.0
language:
- en
datasets:
- wikipedia
pipeline_tag: text-generation
---
## Model description
This is a LLaMA-like model with only 160M parameters trained on Wikipedia and part of the C4-en and C4-realnewslike datasets.
No evaluation has been conducted yet, so use it with care.
The model is mainly developed as a base Small Speculative Model in the [SpecInfer](https://arxiv.org/abs/2305.09781) paper.
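Since the card is otherwise brief, here is a minimal transformers sketch for trying the model; the assisted-decoding comment names a hypothetical larger target model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("JackFram/llama-160m")
model = AutoModelForCausalLM.from_pretrained("JackFram/llama-160m")

inputs = tokenizer("The history of artificial intelligence", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# As a draft model for speculative decoding with transformers' assisted generation:
# big_model.generate(**inputs, assistant_model=model)  # big_model: any larger LLaMA sharing this tokenizer
```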
## Citation
To cite the model, please use
```bibtex
@misc{miao2023specinfer,
title={SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification},
author={Xupeng Miao and Gabriele Oliaro and Zhihao Zhang and Xinhao Cheng and Zeyu Wang and Rae Ying Yee Wong and Zhuoming Chen and Daiyaan Arfeen and Reyna Abhyankar and Zhihao Jia},
year={2023},
eprint={2305.09781},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
raminass/SCOTUS_AI_14
|
raminass
| 2024-01-04T09:17:44Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:raminass/scotus-v10",
"base_model:finetune:raminass/scotus-v10",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-03T19:21:29Z |
---
license: cc-by-sa-4.0
base_model: raminass/scotus-v10
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SCOTUS_AI_14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SCOTUS_AI_14
This model is a fine-tuned version of [raminass/scotus-v10](https://huggingface.co/raminass/scotus-v10) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7616
- Accuracy: 0.8428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5382 | 1.0 | 1762 | 0.5732 | 0.8361 |
| 0.3093 | 2.0 | 3524 | 0.6043 | 0.8392 |
| 0.1676 | 3.0 | 5286 | 0.6812 | 0.8383 |
| 0.1001 | 4.0 | 7048 | 0.7386 | 0.8418 |
| 0.0639 | 5.0 | 8810 | 0.7616 | 0.8428 |
### Justices
| Justice | Count |
|-----------|-------|
| Thomas | 571 |
| Scalia | 473 |
| Breyer | 443 |
| Stevens | 407 |
| Ginsburg | 390 |
| Kennedy | 326 |
| Alito | 286 |
| Souter | 230 |
| Sotomayor | 226 |
| O'Connor | 167 |
| Kagan | 145 |
| Rehnquist | 144 |
| Roberts | 123 |
| Gorsuch | 109 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
raminass/SCOTUS_AI_15
|
raminass
| 2024-01-04T09:13:57Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:raminass/scotus-v10",
"base_model:finetune:raminass/scotus-v10",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-03T10:05:09Z |
---
license: cc-by-sa-4.0
base_model: raminass/scotus-v10
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SCOTUS_AI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SCOTUS_AI
This model is a fine-tuned version of [raminass/scotus-v10](https://huggingface.co/raminass/scotus-v10) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7680
- Accuracy: 0.8341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5767 | 1.0 | 1800 | 0.6222 | 0.8243 |
| 0.2965 | 2.0 | 3600 | 0.6352 | 0.8339 |
| 0.1832 | 3.0 | 5400 | 0.7201 | 0.8261 |
| 0.0991 | 4.0 | 7200 | 0.7398 | 0.8356 |
| 0.0616 | 5.0 | 9000 | 0.7680 | 0.8341 |
### Justices
| Justice | Count |
|-----------|-------|
| Thomas | 571 |
| Scalia | 473 |
| Breyer | 443 |
| Stevens | 407 |
| Ginsburg | 390 |
| Kennedy | 326 |
| Alito | 286 |
| Souter | 230 |
| Sotomayor | 226 |
| O'Connor | 167 |
| Kagan | 145 |
| Rehnquist | 144 |
| Roberts | 123 |
| Gorsuch | 109 |
| Kavanaugh | 65 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
yc4142/phi-1_5-lora-int8-double-metaphor-CoT
|
yc4142
| 2024-01-04T08:57:26Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"region:us"
] | null | 2024-01-04T04:35:41Z |
---
library_name: peft
base_model: microsoft/phi-1_5
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
s3nh/GeneZC-MiniChat-2-3B-GGUF
|
s3nh
| 2024-01-04T08:56:00Z | 4 | 2 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T08:52:32Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/GeneZC/MiniChat-2-3B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
# Original model card
|
KnutJaegersberg/platypus-1_8b
|
KnutJaegersberg
| 2024-01-04T08:54:32Z | 1,443 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-03T21:31:25Z |
---
license: other
license_name: qwen
license_link: LICENSE
---
Full fine-tune of qwen-1_8b on Open Platypus for 5 epochs.
General Prompt Example:
```
### Instruction:
{instruction}
### Response:
```
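A minimal transformers sketch using this template (the instruction text is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KnutJaegersberg/platypus-1_8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instruction:\nExplain what a hash map is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```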
License Agreement
Code and checkpoints are open for research purposes. Check the LICENSE file for more details about the license. For commercial use, please contact us.
|
aumy/RL-CartPole-v1
|
aumy
| 2024-01-04T08:51:27Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-04T08:51:14Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: RL-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 479.30 +/- 62.10
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
hkivancoral/smids_10x_beit_large_sgd_0001_fold4
|
hkivancoral
| 2024-01-04T08:41:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-large-patch16-224",
"base_model:finetune:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-04T04:33:43Z |
---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_beit_large_sgd_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8583333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_beit_large_sgd_0001_fold4
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3577
- Accuracy: 0.8583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.956 | 1.0 | 750 | 0.9742 | 0.4667 |
| 0.7783 | 2.0 | 1500 | 0.8200 | 0.63 |
| 0.7323 | 3.0 | 2250 | 0.7096 | 0.71 |
| 0.6337 | 4.0 | 3000 | 0.6341 | 0.7517 |
| 0.5065 | 5.0 | 3750 | 0.5795 | 0.775 |
| 0.4965 | 6.0 | 4500 | 0.5386 | 0.8 |
| 0.4578 | 7.0 | 5250 | 0.5091 | 0.8117 |
| 0.4692 | 8.0 | 6000 | 0.4857 | 0.825 |
| 0.4711 | 9.0 | 6750 | 0.4676 | 0.8333 |
| 0.3709 | 10.0 | 7500 | 0.4525 | 0.835 |
| 0.4051 | 11.0 | 8250 | 0.4402 | 0.8367 |
| 0.4533 | 12.0 | 9000 | 0.4305 | 0.8417 |
| 0.3537 | 13.0 | 9750 | 0.4215 | 0.8467 |
| 0.4025 | 14.0 | 10500 | 0.4147 | 0.8483 |
| 0.3254 | 15.0 | 11250 | 0.4082 | 0.8467 |
| 0.3312 | 16.0 | 12000 | 0.4031 | 0.8467 |
| 0.2854 | 17.0 | 12750 | 0.3983 | 0.8483 |
| 0.3355 | 18.0 | 13500 | 0.3942 | 0.8517 |
| 0.3881 | 19.0 | 14250 | 0.3905 | 0.8483 |
| 0.3257 | 20.0 | 15000 | 0.3873 | 0.8517 |
| 0.3303 | 21.0 | 15750 | 0.3846 | 0.8483 |
| 0.3308 | 22.0 | 16500 | 0.3815 | 0.8517 |
| 0.3025 | 23.0 | 17250 | 0.3791 | 0.85 |
| 0.3591 | 24.0 | 18000 | 0.3770 | 0.8517 |
| 0.3426 | 25.0 | 18750 | 0.3750 | 0.8567 |
| 0.2909 | 26.0 | 19500 | 0.3737 | 0.8567 |
| 0.3106 | 27.0 | 20250 | 0.3719 | 0.855 |
| 0.3129 | 28.0 | 21000 | 0.3704 | 0.855 |
| 0.2957 | 29.0 | 21750 | 0.3688 | 0.855 |
| 0.2639 | 30.0 | 22500 | 0.3673 | 0.855 |
| 0.2821 | 31.0 | 23250 | 0.3660 | 0.855 |
| 0.2912 | 32.0 | 24000 | 0.3649 | 0.8567 |
| 0.3006 | 33.0 | 24750 | 0.3640 | 0.8583 |
| 0.3129 | 34.0 | 25500 | 0.3632 | 0.8583 |
| 0.2463 | 35.0 | 26250 | 0.3625 | 0.86 |
| 0.3133 | 36.0 | 27000 | 0.3619 | 0.8583 |
| 0.3061 | 37.0 | 27750 | 0.3612 | 0.8583 |
| 0.3206 | 38.0 | 28500 | 0.3606 | 0.8583 |
| 0.3433 | 39.0 | 29250 | 0.3601 | 0.8583 |
| 0.3138 | 40.0 | 30000 | 0.3597 | 0.8583 |
| 0.2988 | 41.0 | 30750 | 0.3593 | 0.8583 |
| 0.3075 | 42.0 | 31500 | 0.3589 | 0.8583 |
| 0.3059 | 43.0 | 32250 | 0.3587 | 0.8583 |
| 0.3142 | 44.0 | 33000 | 0.3585 | 0.8583 |
| 0.3034 | 45.0 | 33750 | 0.3583 | 0.8583 |
| 0.2744 | 46.0 | 34500 | 0.3580 | 0.8583 |
| 0.2599 | 47.0 | 35250 | 0.3579 | 0.8583 |
| 0.2643 | 48.0 | 36000 | 0.3578 | 0.8583 |
| 0.2927 | 49.0 | 36750 | 0.3577 | 0.8583 |
| 0.2381 | 50.0 | 37500 | 0.3577 | 0.8583 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
tily/lora
|
tily
| 2024-01-04T08:37:55Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-01-04T07:58:54Z |
---
license: apache-2.0
---
## testmemo-v0.0.0.safetensors
My first test LoRA, trained on images of my cat, "Memo".

|
winyap1516/mygpt
|
winyap1516
| 2024-01-04T08:34:02Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"code",
"text-generation-inference",
"en",
"zh",
"ms",
"vi",
"ta",
"th",
"dataset:wikimedia/wikipedia",
"dataset:HuggingFaceH4/ultrachat_200k",
"license:mit",
"region:us"
] | null | 2024-01-04T07:44:25Z |
---
license: mit
datasets:
- wikimedia/wikipedia
- HuggingFaceH4/ultrachat_200k
language:
- en
- zh
- ms
- vi
- ta
- th
metrics:
- bleurt
- bleu
- cer
- accuracy
- code_eval
library_name: adapter-transformers
tags:
- code
- text-generation-inference
---
|
abhishek/bertxxx1
|
abhishek
| 2024-01-04T08:26:35Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"coreml",
"onnx",
"safetensors",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-04T08:26:35Z |
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model variations
BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking has replaced subpiece masking in a following work, with the release of two models.
24 other smaller models were released afterward.
The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on github.
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
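The NSP objective described earlier can also be exercised directly. Here is a minimal sketch using `BertForNextSentencePrediction` (the sentence pair below is illustrative):
```python
from transformers import BertTokenizer, BertForNextSentencePrediction
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')

prompt = "The man went to the store."
next_sentence = "He bought a gallon of milk."
encoding = tokenizer(prompt, next_sentence, return_tensors='pt')

with torch.no_grad():
    logits = model(**encoding).logits

# Index 0 scores "sentence B follows sentence A", index 1 scores "random sentence".
prob_is_next = torch.softmax(logits, dim=-1)[0, 0].item()
print(prob_is_next)
```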
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
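As an illustration, the masking procedure above can be sketched as follows (a simplified version operating on a plain token list; the real preprocessing works on WordPiece token ids):
```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    """BERT-style masking: select 15% of tokens; of those,
    80% become [MASK], 10% become a random token, 10% stay unchanged."""
    labels = [None] * len(tokens)  # None = not predicted at this position
    for i, tok in enumerate(tokens):
        if random.random() < mlm_prob:
            labels[i] = tok  # the model must predict the original token here
            r = random.random()
            if r < 0.8:
                tokens[i] = mask_token
            elif r < 0.9:
                tokens[i] = random.choice(vocab)
            # else: leave the token as is
    return tokens, labels
```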
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
sudy-super/baku-10b-chat
|
sudy-super
| 2024-01-04T08:25:26Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"ja",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-28T16:38:23Z |
---
license: apache-2.0
language:
- ja
- en
---
## Description
This model is a 10.2 billion parameter model that combines two sets of 24 layers each from [CALM2-7B-chat](https://huggingface.co/cyberagent/calm2-7b-chat).
## Chat Template
```
USER: {user_message1}
ASSISTANT: {assistant_message1}<|endoftext|>
USER: {user_message2}
ASSISTANT: {assistant_message2}<|endoftext|>
USER: {user_message3}
ASSISTANT: {assistant_message3}<|endoftext|>
```
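As an illustration, a minimal helper that renders a conversation into this template might look as follows (`build_prompt` is a hypothetical name, not part of the model):
```python
def build_prompt(turns, endoftext="<|endoftext|>"):
    """Render (user, assistant) message pairs into the chat template above.
    Pass None as the last assistant message to obtain a generation prompt."""
    parts = []
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}\n")
        if assistant_msg is None:
            parts.append("ASSISTANT: ")
        else:
            parts.append(f"ASSISTANT: {assistant_msg}{endoftext}\n")
    return "".join(parts)

print(build_prompt([("Hello!", "Hi, how can I help?"), ("Tell me a joke.", None)]))
```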
## Tutorial
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("sudy-super/baku-10b-chat")
model = AutoModelForCausalLM.from_pretrained("sudy-super/baku-10b-chat", device_map="auto", torch_dtype=torch.bfloat16)
raw_prompt = "仕事の熱意を取り戻すためのアイデアを5つ挙げてください。"
prompt = f"USER:{raw_prompt}\nASSISTANT:"
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=100,
do_sample=True,
temperature=0.8,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
result = tokenizer.decode(output_ids.tolist()[0])
print(result)
```
|
Anshler/vietnamese-poem-classifier
|
Anshler
| 2024-01-04T08:19:15Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"poem",
"vietnamese",
"classification",
"evaluation",
"vi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-27T05:54:06Z |
---
license: mit
language:
- vi
metrics:
- accuracy
library_name: transformers
tags:
- poem
- vietnamese
- classification
- evaluation
---
# Vietnamese poem classification and evaluation 📜🔍
A Vietnamese poem classifier using [BertForSequenceClassification](https://huggingface.co/trituenhantaoio/bert-base-vietnamese-uncased) with an accuracy of ```99.7%```
This is a side project during the making of our [Vietnamese poem generator](https://github.com/Anshler/poem_generator)
## Features
* Classify Vietnamese poems into the categories ```4 chu```, ```5 chu```, ```7 chu```, ```luc bat``` and ```8 chu```
* Score the quality of each poem, based solely on its conformity to the rigid rules of the various types of Vietnamese poem, using three criteria: Length (L), Tone (T) and Rhyme (R), as follows: ```score = L/10 + 3T/10 + 6R/10``` (see the sketch after the table below)
The rules for each genre are defined below:
| Genre | Length | Tone | Rhyme |
|------------------|------------------|--------------|------------------------|
| 4 chu | - 4 words per line <br>- 4 lines per stanza (optional) | For each line: <br>- If the 2nd word is uneven (trắc), the 4th word is even (bằng) <br>- Vice versa | Last word (4th) of each line: <br>- Continuous rhyme (gieo vần tiếp) <br>- Alternating rhyme (gieo vần tréo) <br>- Three-line rhyme (gieo vần ba)|
| 5 chu | - 5 words per line <br>- 4 lines per stanza (optional) | Same as "4 chu" | Same as "4 chu" |
| 7 chu | - 7 words per line <br>- 4 lines per stanza (optional) | For each line: <br>- If the 2nd word is uneven (trắc), the 4th word is even (bằng), the 6th word is uneven (trắc) <br> - 5th word and last word (7th) must have different tone | The last word of 1st, 2nd, 4th line per stanza must have same tone and rhyme |
| luc bat | - 6 words in odd lines <br>- 8 words in even lines <br>- 4 lines per stanza (optional) | For each 6-word line: <br>- If the 2nd word is uneven (trắc), the 4th word is even (bằng), the 6th word is uneven (trắc) <br><br> For each 8-word line: <br>- Must be the same as the previous 6-word line <br>- The last word (8th) must have the same tone as the 6th word but a different accent | The last word (6th) of a 6-word line must rhyme with the 6th word of the next 8-word line and the 8th word of the previous 8-word line |
| 8 chu | - 8 words per line <br>- 4 lines per stanza (optional) | For each line: <br>- If the 3rd word is uneven (trắc), the 5th word is even (bằng), the 8th word is uneven (trắc)| Same as "4 chu" |
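A minimal sketch of the weighted score above, assuming each component score lies in [0, 1]:
```python
def poem_score(l_score: float, t_score: float, r_score: float) -> float:
    """score = L/10 + 3T/10 + 6R/10 with L, T, R in [0, 1]."""
    return l_score / 10 + 3 * t_score / 10 + 6 * r_score / 10

# Matches the inference example below: L=1.0, T=1.0, R≈0.5833 -> 0.75
print(poem_score(1.0, 1.0, 0.5833333333333333))  # ≈ 0.75
```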
## Data
A collection of 171,188 Vietnamese poems of different genres: luc-bat, 5-chu, 7-chu, 8-chu, 4-chu. Download [here](https://github.com/fsoft-ailab/Poem-Generator/raw/master/dataset/poems_dataset.zip)
For more details, refer to the _Acknowledgments_ section
## Training
Training code is in our repo [Vietnamese poem generator](https://github.com/Anshler/poem_generator)
Run:
```
python poem_classifier_training.py
```
## Installation
```
pip install vietnamese-poem-classifier
```
Or
```
pip install git+https://github.com/Anshler/vietnamese-poem-classifier
```
## Inference
```python
from vietnamese_poem_classifier.poem_classifier import PoemClassifier
classifier = PoemClassifier()
poem = '''Người đi theo gió đuổi mây
Tôi buồn nhặt nhạnh tháng ngày lãng quên
Em theo hú bóng kim tiền
Bần thần tôi ngẫm triền miên thói đời.'''
classifier.predict(poem)
#>> [{'label': 'luc bat', 'confidence': 0.9999017715454102, 'poem_score': 0.75, 'l_score': 1.0, 't_score': 1.0, 'r_score': 0.5833333333333333}]
```
## Model
The model's weights are published at Huggingface [Anshler/vietnamese-poem-classifier](https://huggingface.co/Anshler/vietnamese-poem-classifier)
## Acknowledgments
_This project was inspired by the evaluation method from ```fsoft-ailab```'s_ [SP-GPT2 Poem-Generator](https://github.com/fsoft-ailab/Poem-Generator)
_Dataset also taken from their repo_
|
anhdt-dsai-02/Bloom_1_4
|
anhdt-dsai-02
| 2024-01-04T08:17:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloomz-3b",
"base_model:adapter:bigscience/bloomz-3b",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-04T07:17:33Z |
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloomz-3b
model-index:
- name: Bloom_1_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bloom_1_4
This model is a fine-tuned version of [bigscience/bloomz-3b](https://huggingface.co/bigscience/bloomz-3b) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ansilmbabl/test
|
ansilmbabl
| 2024-01-04T08:11:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | 2024-01-04T08:11:20Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: HuggingFaceH4/zephyr-7b-beta
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
wilzh40/groove-lora
|
wilzh40
| 2024-01-04T08:10:44Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:finetune:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2023-12-26T19:12:07Z |
---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: groove-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# groove-lora
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 80
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
hails/PE_Llama_2_7b_sft_rlhf
|
hails
| 2024-01-04T07:53:43Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:arrow",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-03T14:44:41Z |
---
tags:
- generated_from_trainer
datasets:
- arrow
model-index:
- name: PE_Llama_2_7b_sft_rlhf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PE_Llama_2_7b_sft_rlhf
This model was trained from scratch on the arrow dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0093
- Rewards/chosen: -7.0331
- Rewards/rejected: -29.3861
- Rewards/accuracies: 0.9916
- Rewards/margins: 22.3530
- Logps/rejected: -118.6765
- Logps/chosen: -90.0482
- Logits/rejected: -1.3495
- Logits/chosen: -1.4301
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5577 | 0.05 | 100 | 0.5743 | -0.0890 | -0.3528 | 0.9022 | 0.2638 | -60.6098 | -76.1599 | -1.3076 | -1.3716 |
| 0.1502 | 0.09 | 200 | 0.1761 | -0.5864 | -2.4951 | 0.9804 | 1.9086 | -64.8944 | -77.1548 | -1.3397 | -1.4091 |
| 0.0367 | 0.14 | 300 | 0.0640 | -1.1815 | -4.8466 | 0.9860 | 3.6651 | -69.5975 | -78.3450 | -1.3685 | -1.4428 |
| 0.0195 | 0.18 | 400 | 0.0419 | -1.6306 | -6.4153 | 0.9832 | 4.7847 | -72.7348 | -79.2431 | -1.3875 | -1.4648 |
| 0.0128 | 0.23 | 500 | 0.0321 | -2.1351 | -8.0395 | 0.9860 | 5.9044 | -75.9833 | -80.2522 | -1.4045 | -1.4847 |
| 0.0078 | 0.27 | 600 | 0.0294 | -2.8235 | -9.6992 | 0.9860 | 6.8757 | -79.3027 | -81.6291 | -1.4163 | -1.4986 |
| 0.0074 | 0.32 | 700 | 0.0177 | -2.7718 | -10.7772 | 0.9832 | 8.0054 | -81.4587 | -81.5256 | -1.4251 | -1.5079 |
| 0.0051 | 0.37 | 800 | 0.0144 | -2.4805 | -11.3179 | 0.9832 | 8.8374 | -82.5400 | -80.9429 | -1.4353 | -1.5181 |
| 0.003 | 0.41 | 900 | 0.0160 | -2.8352 | -12.2817 | 0.9860 | 9.4465 | -84.4677 | -81.6525 | -1.4421 | -1.5261 |
| 0.0031 | 0.46 | 1000 | 0.0122 | -2.8873 | -13.0359 | 0.9860 | 10.1487 | -85.9761 | -81.7565 | -1.4514 | -1.5345 |
| 0.0107 | 0.5 | 1100 | 0.0110 | -2.8383 | -13.0784 | 0.9888 | 10.2401 | -86.0611 | -81.6586 | -1.4506 | -1.5334 |
| 0.0065 | 0.55 | 1200 | 0.0130 | -3.3682 | -13.9857 | 0.9860 | 10.6176 | -87.8757 | -82.7184 | -1.4603 | -1.5441 |
| 0.0054 | 0.59 | 1300 | 0.0123 | -3.6048 | -14.8999 | 0.9888 | 11.2951 | -89.7041 | -83.1916 | -1.4576 | -1.5403 |
| 0.0048 | 0.64 | 1400 | 0.0091 | -3.3176 | -15.0505 | 0.9860 | 11.7329 | -90.0053 | -82.6172 | -1.4598 | -1.5418 |
| 0.0017 | 0.68 | 1500 | 0.0087 | -3.3081 | -15.5642 | 0.9860 | 12.2561 | -91.0327 | -82.5982 | -1.4671 | -1.5494 |
| 0.0042 | 0.73 | 1600 | 0.0091 | -3.5315 | -16.2814 | 0.9860 | 12.7498 | -92.4670 | -83.0451 | -1.4722 | -1.5560 |
| 0.0035 | 0.78 | 1700 | 0.0078 | -3.1483 | -15.9040 | 0.9916 | 12.7557 | -91.7122 | -82.2786 | -1.4664 | -1.5481 |
| 0.0094 | 0.82 | 1800 | 0.0071 | -2.9923 | -15.9175 | 0.9888 | 12.9251 | -91.7391 | -81.9667 | -1.4572 | -1.5390 |
| 0.0024 | 0.87 | 1900 | 0.0066 | -2.9861 | -16.5288 | 0.9916 | 13.5427 | -92.9619 | -81.9542 | -1.4690 | -1.5511 |
| 0.0067 | 0.91 | 2000 | 0.0076 | -3.2851 | -16.0301 | 0.9916 | 12.7450 | -91.9644 | -82.5522 | -1.4577 | -1.5391 |
| 0.0044 | 0.96 | 2100 | 0.0064 | -3.3414 | -16.8752 | 0.9944 | 13.5338 | -93.6545 | -82.6647 | -1.4617 | -1.5440 |
| 0.0025 | 1.0 | 2200 | 0.0060 | -3.1967 | -16.8252 | 0.9944 | 13.6285 | -93.5546 | -82.3753 | -1.4630 | -1.5444 |
| 0.0023 | 1.05 | 2300 | 0.0063 | -3.5595 | -17.6105 | 0.9916 | 14.0510 | -95.1253 | -83.1011 | -1.4645 | -1.5467 |
| 0.0055 | 1.1 | 2400 | 0.0070 | -4.0460 | -18.6662 | 0.9944 | 14.6201 | -97.2365 | -84.0740 | -1.4606 | -1.5441 |
| 0.0052 | 1.14 | 2500 | 0.0067 | -3.3185 | -17.6030 | 0.9944 | 14.2844 | -95.1102 | -82.6191 | -1.4679 | -1.5507 |
| 0.0023 | 1.19 | 2600 | 0.0064 | -3.4071 | -18.2406 | 0.9944 | 14.8335 | -96.3854 | -82.7962 | -1.4667 | -1.5501 |
| 0.0044 | 1.23 | 2700 | 0.0090 | -4.3343 | -19.6985 | 0.9916 | 15.3642 | -99.3012 | -84.6506 | -1.4647 | -1.5496 |
| 0.0033 | 1.28 | 2800 | 0.0113 | -4.6406 | -19.7381 | 0.9916 | 15.0976 | -99.3805 | -85.2631 | -1.4569 | -1.5408 |
| 0.0023 | 1.32 | 2900 | 0.0070 | -3.9341 | -19.4138 | 0.9944 | 15.4797 | -98.7318 | -83.8501 | -1.4612 | -1.5449 |
| 0.0034 | 1.37 | 3000 | 0.0066 | -3.7082 | -18.5209 | 0.9916 | 14.8127 | -96.9460 | -83.3983 | -1.4587 | -1.5399 |
| 0.0033 | 1.42 | 3100 | 0.0064 | -3.6694 | -18.6338 | 0.9972 | 14.9644 | -97.1717 | -83.3208 | -1.4480 | -1.5297 |
| 0.0034 | 1.46 | 3200 | 0.0059 | -3.7376 | -19.1673 | 0.9944 | 15.4298 | -98.2389 | -83.4571 | -1.4483 | -1.5307 |
| 0.0019 | 1.51 | 3300 | 0.0061 | -3.9735 | -19.7068 | 0.9916 | 15.7332 | -99.3178 | -83.9291 | -1.4459 | -1.5285 |
| 0.0011 | 1.55 | 3400 | 0.0066 | -4.3242 | -20.4806 | 0.9944 | 16.1564 | -100.8654 | -84.6304 | -1.4412 | -1.5245 |
| 0.0001 | 1.6 | 3500 | 0.0093 | -4.7847 | -21.0204 | 0.9916 | 16.2357 | -101.9450 | -85.5513 | -1.4308 | -1.5145 |
| 0.0037 | 1.64 | 3600 | 0.0076 | -4.5704 | -20.9595 | 0.9888 | 16.3891 | -101.8232 | -85.1228 | -1.4373 | -1.5209 |
| 0.003 | 1.69 | 3700 | 0.0087 | -4.7965 | -21.6522 | 0.9916 | 16.8557 | -103.2086 | -85.5750 | -1.4300 | -1.5148 |
| 0.0056 | 1.73 | 3800 | 0.0093 | -5.1262 | -22.2592 | 0.9916 | 17.1330 | -104.4226 | -86.2344 | -1.4213 | -1.5058 |
| 0.0024 | 1.78 | 3900 | 0.0113 | -5.8601 | -23.7638 | 0.9888 | 17.9037 | -107.4319 | -87.7022 | -1.4014 | -1.4856 |
| 0.0034 | 1.83 | 4000 | 0.0056 | -4.7077 | -22.5264 | 0.9944 | 17.8187 | -104.9570 | -85.3974 | -1.4252 | -1.5084 |
| 0.0044 | 1.87 | 4100 | 0.0055 | -4.2834 | -21.6926 | 0.9972 | 17.4092 | -103.2894 | -84.5488 | -1.4342 | -1.5165 |
| 0.0001 | 1.92 | 4200 | 0.0068 | -5.2542 | -23.4097 | 0.9916 | 18.1555 | -106.7237 | -86.4905 | -1.4219 | -1.5052 |
| 0.0044 | 1.96 | 4300 | 0.0075 | -5.2492 | -23.2824 | 0.9888 | 18.0332 | -106.4690 | -86.4804 | -1.4098 | -1.4921 |
| 0.0022 | 2.01 | 4400 | 0.0082 | -5.6200 | -23.9342 | 0.9944 | 18.3142 | -107.7725 | -87.2220 | -1.4087 | -1.4906 |
| 0.0033 | 2.05 | 4500 | 0.0091 | -5.9484 | -24.5607 | 0.9916 | 18.6123 | -109.0256 | -87.8787 | -1.4036 | -1.4857 |
| 0.0022 | 2.1 | 4600 | 0.0091 | -6.0570 | -25.0424 | 0.9916 | 18.9853 | -109.9890 | -88.0961 | -1.3980 | -1.4804 |
| 0.0011 | 2.15 | 4700 | 0.0100 | -6.3832 | -25.6097 | 0.9888 | 19.2265 | -111.1236 | -88.7484 | -1.3907 | -1.4732 |
| 0.0065 | 2.19 | 4800 | 0.0073 | -5.7898 | -25.1360 | 0.9916 | 19.3462 | -110.1763 | -87.5616 | -1.4006 | -1.4827 |
| 0.0022 | 2.24 | 4900 | 0.0091 | -6.1379 | -25.9334 | 0.9916 | 19.7955 | -111.7710 | -88.2578 | -1.3907 | -1.4732 |
| 0.0022 | 2.28 | 5000 | 0.0147 | -7.3728 | -27.6080 | 0.9888 | 20.2352 | -115.1203 | -90.7277 | -1.3738 | -1.4564 |
| 0.0033 | 2.33 | 5100 | 0.0120 | -6.9056 | -27.3057 | 0.9888 | 20.4002 | -114.5157 | -89.7931 | -1.3780 | -1.4604 |
| 0.0043 | 2.37 | 5200 | 0.0097 | -6.5949 | -27.6154 | 0.9888 | 21.0205 | -115.1350 | -89.1717 | -1.3772 | -1.4593 |
| 0.0022 | 2.42 | 5300 | 0.0152 | -7.5122 | -28.6578 | 0.9888 | 21.1456 | -117.2199 | -91.0065 | -1.3647 | -1.4465 |
| 0.0022 | 2.46 | 5400 | 0.0149 | -7.7072 | -29.4467 | 0.9888 | 21.7395 | -118.7977 | -91.3965 | -1.3515 | -1.4331 |
| 0.0001 | 2.51 | 5500 | 0.0137 | -7.6730 | -29.4473 | 0.9916 | 21.7743 | -118.7989 | -91.3281 | -1.3483 | -1.4293 |
| 0.0022 | 2.56 | 5600 | 0.0133 | -7.6989 | -29.6686 | 0.9916 | 21.9697 | -119.2415 | -91.3798 | -1.3485 | -1.4299 |
| 0.0011 | 2.6 | 5700 | 0.0095 | -6.8592 | -28.9672 | 0.9888 | 22.1080 | -117.8385 | -89.7003 | -1.3553 | -1.4366 |
| 0.0054 | 2.65 | 5800 | 0.0077 | -6.4136 | -28.4244 | 0.9916 | 22.0108 | -116.7531 | -88.8093 | -1.3637 | -1.4450 |
| 0.0033 | 2.69 | 5900 | 0.0115 | -7.6490 | -30.1521 | 0.9888 | 22.5031 | -120.2085 | -91.2800 | -1.3400 | -1.4208 |
| 0.0011 | 2.74 | 6000 | 0.0086 | -6.8537 | -29.1407 | 0.9888 | 22.2870 | -118.1857 | -89.6894 | -1.3510 | -1.4317 |
| 0.0011 | 2.78 | 6100 | 0.0095 | -7.1201 | -29.6324 | 0.9888 | 22.5123 | -119.1690 | -90.2221 | -1.3452 | -1.4257 |
| 0.0022 | 2.83 | 6200 | 0.0086 | -6.8942 | -29.1673 | 0.9916 | 22.2731 | -118.2387 | -89.7703 | -1.3531 | -1.4335 |
| 0.0013 | 2.88 | 6300 | 0.0086 | -6.8366 | -29.0334 | 0.9916 | 22.1968 | -117.9710 | -89.6551 | -1.3543 | -1.4349 |
| 0.0033 | 2.92 | 6400 | 0.0096 | -7.0073 | -29.2913 | 0.9916 | 22.2840 | -118.4869 | -89.9966 | -1.3494 | -1.4303 |
| 0.0011 | 2.97 | 6500 | 0.0092 | -6.9778 | -29.3366 | 0.9916 | 22.3588 | -118.5774 | -89.9376 | -1.3494 | -1.4297 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
marcogfedozzi/reinforce-Pixelcopter-PLE-v0-optim
|
marcogfedozzi
| 2024-01-04T07:42:44Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-04T07:28:27Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-Pixelcopter-PLE-v0-optim
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 39.40 +/- 22.24
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
salazar-rich/Taxi-v3-Qlearning
|
salazar-rich
| 2024-01-04T07:42:40Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-04T07:42:33Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-Qlearning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is defined in the Deep RL course notebook: it downloads
# and unpickles the saved Q-table dictionary from the Hugging Face Hub.
model = load_from_hub(repo_id="salazar-rich/Taxi-v3-Qlearning", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
shahrukh95/falcon-7b-Set-1-cybersecurity-layered-config
|
shahrukh95
| 2024-01-04T07:42:38Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"base_model:finetune:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2024-01-04T07:41:05Z |
---
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-Set-1-cybersecurity-layered-config
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-Set-1-cybersecurity-layered-config
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
mamamiya405/alpaca_lora_doc_summary
|
mamamiya405
| 2024-01-04T07:41:12Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-07T13:42:38Z |
---
library_name: peft
---
## base_model
- decapoda-research/llama-7b-hf
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
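A minimal loading sketch for this adapter, assuming the 8-bit setup above (library calls from `peft`/`transformers`; base model and adapter paths as listed in this card):
```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,          # matches the bitsandbytes config above
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = PeftModel.from_pretrained(base, "mamamiya405/alpaca_lora_doc_summary")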
### Framework versions
- PEFT 0.4.0.dev0
|
svenbl80/roberta-base-finetuned-chatdoc-V3
|
svenbl80
| 2024-01-04T07:40:26Z | 1 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-04T07:07:45Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: svenbl80/roberta-base-finetuned-chatdoc-V3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# svenbl80/roberta-base-finetuned-chatdoc-V3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6956
- Validation Loss: 0.4497
- Train Accuracy: 0.8652
- Epoch: 28
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-06, 'decay_steps': 330, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.1344 | 1.1124 | 0.1236 | 0 |
| 1.0978 | 1.0641 | 0.8652 | 1 |
| 1.0575 | 1.0020 | 0.8652 | 2 |
| 0.9999 | 0.9336 | 0.8652 | 3 |
| 0.9391 | 0.8170 | 0.8652 | 4 |
| 0.8501 | 0.6621 | 0.8652 | 5 |
| 0.7780 | 0.5321 | 0.8652 | 6 |
| 0.7866 | 0.4850 | 0.8652 | 7 |
| 0.7613 | 0.4796 | 0.8652 | 8 |
| 0.7512 | 0.4847 | 0.8652 | 9 |
| 0.7432 | 0.4933 | 0.8652 | 10 |
| 0.7474 | 0.4919 | 0.8652 | 11 |
| 0.7580 | 0.4863 | 0.8652 | 12 |
| 0.7253 | 0.4840 | 0.8652 | 13 |
| 0.7166 | 0.4724 | 0.8652 | 14 |
| 0.7245 | 0.4725 | 0.8652 | 15 |
| 0.7144 | 0.4706 | 0.8652 | 16 |
| 0.6870 | 0.4628 | 0.8652 | 17 |
| 0.6925 | 0.4583 | 0.8652 | 18 |
| 0.6945 | 0.4620 | 0.8652 | 19 |
| 0.6930 | 0.4564 | 0.8652 | 20 |
| 0.6737 | 0.4572 | 0.8652 | 21 |
| 0.6809 | 0.4496 | 0.8652 | 22 |
| 0.6766 | 0.4523 | 0.8652 | 23 |
| 0.7007 | 0.4525 | 0.8652 | 24 |
| 0.6945 | 0.4538 | 0.8652 | 25 |
| 0.6980 | 0.4521 | 0.8652 | 26 |
| 0.6769 | 0.4508 | 0.8652 | 27 |
| 0.6956 | 0.4497 | 0.8652 | 28 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.9.1
- Datasets 2.15.0
- Tokenizers 0.13.3
|
rafalposwiata/deproberta-large-v1
|
rafalposwiata
| 2024-01-04T07:28:03Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"depression",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-12-09T13:22:46Z |
---
language:
- en
library_name: transformers
tags:
- depression
- roberta
---
# DepRoBERTa
DepRoBERTa (RoBERTa for Depression Detection) is a language model based on RoBERTa-large, further pre-trained on depressive posts from Reddit.
Model was part of the winning solution for [the Shared Task on Detecting Signs of Depression
from Social Media Text](https://competitions.codalab.org/competitions/36410) at [LT-EDI-ACL2022](https://sites.google.com/view/lt-edi-2022/home).
More information can be found in the following paper: [OPI@LT-EDI-ACL2022: Detecting Signs of Depression from Social Media Text using RoBERTa Pre-trained Language Models](https://aclanthology.org/2022.ltedi-1.40/).
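As a quick sanity check, the model can be used with the fill-mask pipeline (the input sentence is illustrative; RoBERTa models use the `<mask>` token):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="rafalposwiata/deproberta-large-v1")
print(unmasker("I feel so <mask> and alone."))
```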
If you use this model, please cite:
```
@inproceedings{poswiata-perelkiewicz-2022-opi,
title = "{OPI}@{LT}-{EDI}-{ACL}2022: Detecting Signs of Depression from Social Media Text using {R}o{BERT}a Pre-trained Language Models",
author = "Po{\'s}wiata, Rafa{\l} and Pere{\l}kiewicz, Micha{\l}",
booktitle = "Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.ltedi-1.40",
doi = "10.18653/v1/2022.ltedi-1.40",
pages = "276--282",
}
```
|
rafalposwiata/roberta-large-depression
|
rafalposwiata
| 2024-01-04T07:27:28Z | 209 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"depression",
"en",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-16T15:30:32Z |
---
language:
- en
library_name: transformers
tags:
- depression
- roberta
base_model: roberta-large
---
Fine-tuned [RoBERTa](https://huggingface.co/roberta-large) model for detecting the level of depression as **not depression**, **moderate** or **severe**, based on social media posts in English.
Model was part of the winning solution for [the Shared Task on Detecting Signs of Depression
from Social Media Text](https://competitions.codalab.org/competitions/36410) at [LT-EDI-ACL2022](https://sites.google.com/view/lt-edi-2022/home).
More information can be found in the following paper: [OPI@LT-EDI-ACL2022: Detecting Signs of Depression from Social Media Text using RoBERTa Pre-trained Language Models](https://aclanthology.org/2022.ltedi-1.40/).
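A minimal inference sketch using the text-classification pipeline (the example post is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="rafalposwiata/roberta-large-depression")
print(classifier("I can't find the energy to get out of bed anymore."))
# Expected labels: "not depression", "moderate" or "severe"
```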
If you use this model, please cite:
```
@inproceedings{poswiata-perelkiewicz-2022-opi,
title = "{OPI}@{LT}-{EDI}-{ACL}2022: Detecting Signs of Depression from Social Media Text using {R}o{BERT}a Pre-trained Language Models",
author = "Po{\'s}wiata, Rafa{\l} and Pere{\l}kiewicz, Micha{\l}",
booktitle = "Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.ltedi-1.40",
doi = "10.18653/v1/2022.ltedi-1.40",
pages = "276--282",
}
```
|
KnutJaegersberg/Deacon-1_8b
|
KnutJaegersberg
| 2024-01-04T07:18:26Z | 1,442 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:KnutJaegersberg/trilobite",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-03T20:21:48Z |
---
license: other
license_name: qwen
license_link: LICENSE
datasets:
- KnutJaegersberg/trilobite
---
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
```
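A minimal generation sketch with this prompt format (generation parameters are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KnutJaegersberg/Deacon-1_8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "### System:\n"
    "You are an AI assistant. User will give you a task. Your goal is to complete the task "
    "as faithfully as you can. While performing the task think step-by-step and justify your steps.\n"
    "### Instruction:\n"
    "How do you fine tune a large language model?\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```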
|
svenbl80/roberta-base-finetuned-chatdoc
|
svenbl80
| 2024-01-04T07:10:13Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-02T14:32:40Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: svenbl80/roberta-base-finetuned-chatdoc
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# svenbl80/roberta-base-finetuned-chatdoc
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9441
- Validation Loss: 0.4817
- Train Accuracy: 0.8652
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 330, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.9441 | 0.4817 | 0.8652 | 0 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.9.1
- Datasets 2.15.0
- Tokenizers 0.13.3
|
lanmandragoran/urdu-whisper-large-v2
|
lanmandragoran
| 2024-01-04T07:03:15Z | 23 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ur",
"dataset:mozilla-foundation/common_voice_14_0",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-03T05:24:44Z |
---
language:
- ur
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_14_0
metrics:
- wer
model-index:
- name: Whisper Large Ur
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 14.0
type: mozilla-foundation/common_voice_14_0
config: ur
split: test
args: ur
metrics:
- name: Wer
type: wer
value: 32.20306217135787
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Ur
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 14.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5723
- Wer: 32.2031
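A minimal transcription sketch with the ASR pipeline (`audio.mp3` is a placeholder path):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lanmandragoran/urdu-whisper-large-v2")
print(asr("audio.mp3")["text"])
```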
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0147 | 9.06 | 1000 | 0.5723 | 32.2031 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
Gangwoo/llama-7b-sft-lora-qrecc
|
Gangwoo
| 2024-01-04T06:55:56Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-29T07:06:31Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llama-7b-sft-lora-qrecc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-sft-lora-qrecc
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3033 | 1.0 | 1 | 1.5189 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
sacx/Test
|
sacx
| 2024-01-04T06:53:48Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-01-04T06:53:48Z |
---
license: other
license_name: test
license_link: LICENSE
---
|
alitolga/bert-base-uncased-conference
|
alitolga
| 2024-01-04T06:48:42Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-01-04T06:47:25Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-conference
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-conference
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 34 | 6.5181 |
| No log | 2.0 | 68 | 6.1254 |
| 6.463 | 3.0 | 102 | 6.0238 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
lxl2023/autotrain-9974e-705pt
|
lxl2023
| 2024-01-04T06:47:59Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:lxl2023/autotrain-data-autotrain-9974e-705pt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-04T06:47:34Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- lxl2023/autotrain-data-autotrain-9974e-705pt
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.9459153413772583
- f1_macro: 0.26666666666666666
- f1_micro: 0.5
- f1_weighted: 0.4
- precision_macro: 0.2222222222222222
- precision_micro: 0.5
- precision_weighted: 0.3333333333333333
- recall_macro: 0.3333333333333333
- recall_micro: 0.5
- recall_weighted: 0.5
- accuracy: 0.5
|
m-aliabbas1/idrakFE_Fewshot
|
m-aliabbas1
| 2024-01-04T06:47:17Z | 7 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-01-04T06:44:20Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: Please email the information to me.
- text: Give me a second, please.
- text: Is it possible to talk to a higher authority?
- text: Sorry, too busy to chat right now.
- text: I already own one, thanks.
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9333333333333333
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 25 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| do_not_qualify | <ul><li>"Your target age group doesn't include me."</li><li>"I'm outside the age range for this."</li><li>"I'm not in the age group you're looking for."</li></ul> |
| can_you_email | <ul><li>'I prefer email, can you write to me?'</li><li>'Email is more convenient for me, can you use that?'</li><li>'Can you send me the details by email?'</li></ul> |
| say_again | <ul><li>'Can you repeat that, please?'</li><li>'I missed that, can you say it again?'</li><li>'Could you please repeat what you just said?'</li></ul> |
| hold_a_sec | <ul><li>'One moment, please hold.'</li><li>'Hang on for a bit, please.'</li><li>'Just a minute, please.'</li></ul> |
| language_barrier | <ul><li>'English is hard for me, ¿puedo hablar en español?'</li><li>'I struggle with English, ¿puede ser en español?'</li><li>"I'm more comfortable in Spanish, ¿podemos continuar en español?"</li></ul> |
| decline | <ul><li>'wrong'</li><li>'Never'</li><li>"I don't want this, thank you."</li></ul> |
| transfer_request | <ul><li>'Can you transfer this call to your superior?'</li><li>'I need to speak with someone in charge.'</li><li>'Can I speak with your manager?'</li></ul> |
| scam | <ul><li>"I'm skeptical, this doesn't sound right."</li><li>"I'm wary, this feels like a scam."</li><li>"Are you sure this isn't a scam?"</li></ul> |
| who_are_you | <ul><li>"I would like to know who's calling."</li><li>"Who's calling, please?"</li><li>'Who are you and why are you calling?'</li></ul> |
| where_did_you_get_my_info | <ul><li>'Can you explain how you got my contact info?'</li><li>"What's the source of my details you have?"</li><li>"I didn't give you my number, where did you get it?"</li></ul> |
| do_not_call | <ul><li>"Stop calling me, it's annoying!"</li><li>"I don't want to be contacted again."</li><li>"Enough calls, I'm not interested!"</li></ul> |
| where_are_you_calling_from | <ul><li>'Where are you calling from?'</li><li>'From which city or country are you calling?'</li><li>'Could you inform me of your current location?'</li></ul> |
| complain_calls | <ul><li>"Too many calls like this, it's irritating."</li><li>"I've had several calls like this, it's annoying."</li><li>"I keep getting these calls, it's too much."</li></ul> |
| busy | <ul><li>"Right now isn't good, I'm busy with something."</li><li>"I'm swamped at the moment, sorry."</li><li>"I'm busy right now, can't talk."</li></ul> |
| greetings | <ul><li>'Hi, how can I help you?'</li><li>'Hello, what can I help you with today?'</li><li>'Hello, yes?'</li></ul> |
| sorry_greeting | <ul><li>"I'm not at my best, what do you need?"</li><li>"Sorry, it's a bad time, I'm sick."</li><li>"Not a great time, I'm dealing with a personal issue."</li></ul> |
| GreetBack | <ul><li>'Doing well, how about yourself?'</li><li>'Pretty good, what about you?'</li><li>"Not bad, and how's it going on your end?"</li></ul> |
| calling_about | <ul><li>'Why are you calling me?'</li><li>"What's the matter, why the call?"</li><li>'May I know the reason for your call?'</li></ul> |
| answering_machine | <ul><li>"Leave a message and I'll get back to you."</li><li>"You're speaking to an answering machine, leave a message."</li><li>"This is an answering machine, I'm not available."</li></ul> |
| weather | <ul><li>"Sunny skies here, what's it like where you are?"</li><li>"It's a bit cloudy here, is it the same there?"</li><li>"It's warm here, what about where you are?"</li></ul> |
| are_you_bot | <ul><li>'Is this a bot calling me?'</li><li>'Is this a recorded message or are you real?'</li><li>'Are you a live person or a recording?'</li></ul> |
| affirmation | <ul><li>'yes'</li><li>"That's true, yes."</li><li>"Precisely, that's right."</li></ul> |
| not_interested | <ul><li>"This doesn't interest me, sorry."</li><li>"This offer isn't relevant to my interests."</li><li>"Thanks, but this isn't something I need."</li></ul> |
| already | <ul><li>"I've made this purchase before."</li><li>"This isn't new to me, I have it already."</li><li>"I've been using this for a while now."</li></ul> |
| abusibve | <ul><li>"This is unacceptable, I won't tolerate this!"</li><li>'I demand you stop this abusive calling!'</li><li>"Stop calling me, it's harassment!"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9333 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("m-aliabbas1/idrakFE_Fewshot")
# Run inference
preds = model("Give me a second, please.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 6.8375 | 13 |
| Label | Training Sample Count |
|:---------------------------|:----------------------|
| GreetBack | 9 |
| abusibve | 9 |
| affirmation | 10 |
| already | 10 |
| answering_machine | 8 |
| are_you_bot | 8 |
| busy | 9 |
| calling_about | 8 |
| can_you_email | 11 |
| complain_calls | 11 |
| decline | 10 |
| do_not_call | 12 |
| do_not_qualify | 9 |
| greetings | 8 |
| hold_a_sec | 8 |
| language_barrier | 10 |
| not_interested | 11 |
| say_again | 12 |
| scam | 9 |
| sorry_greeting | 9 |
| transfer_request | 8 |
| weather | 10 |
| where_are_you_calling_from | 9 |
| where_did_you_get_my_info | 11 |
| who_are_you | 11 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0008 | 1 | 0.1054 | - |
| 0.0417 | 50 | 0.1111 | - |
| 0.0833 | 100 | 0.0798 | - |
| 0.125 | 150 | 0.0826 | - |
| 0.1667 | 200 | 0.0308 | - |
| 0.2083 | 250 | 0.0324 | - |
| 0.25 | 300 | 0.0607 | - |
| 0.2917 | 350 | 0.0042 | - |
| 0.3333 | 400 | 0.0116 | - |
| 0.375 | 450 | 0.0049 | - |
| 0.4167 | 500 | 0.0154 | - |
| 0.4583 | 550 | 0.0158 | - |
| 0.5 | 600 | 0.0036 | - |
| 0.5417 | 650 | 0.001 | - |
| 0.5833 | 700 | 0.0015 | - |
| 0.625 | 750 | 0.0012 | - |
| 0.6667 | 800 | 0.0009 | - |
| 0.7083 | 850 | 0.0008 | - |
| 0.75 | 900 | 0.0008 | - |
| 0.7917 | 950 | 0.0014 | - |
| 0.8333 | 1000 | 0.0005 | - |
| 0.875 | 1050 | 0.0027 | - |
| 0.9167 | 1100 | 0.0007 | - |
| 0.9583 | 1150 | 0.0008 | - |
| 1.0 | 1200 | 0.0012 | - |
| 1.0417 | 1250 | 0.0012 | - |
| 1.0833 | 1300 | 0.0006 | - |
| 1.125 | 1350 | 0.0005 | - |
| 1.1667 | 1400 | 0.0003 | - |
| 1.2083 | 1450 | 0.0012 | - |
| 1.25 | 1500 | 0.0006 | - |
| 1.2917 | 1550 | 0.0008 | - |
| 1.3333 | 1600 | 0.0008 | - |
| 1.375 | 1650 | 0.0003 | - |
| 1.4167 | 1700 | 0.0004 | - |
| 1.4583 | 1750 | 0.0005 | - |
| 1.5 | 1800 | 0.0004 | - |
| 1.5417 | 1850 | 0.0004 | - |
| 1.5833 | 1900 | 0.0008 | - |
| 1.625 | 1950 | 0.0004 | - |
| 1.6667 | 2000 | 0.0004 | - |
| 1.7083 | 2050 | 0.0021 | - |
| 1.75 | 2100 | 0.0004 | - |
| 1.7917 | 2150 | 0.0002 | - |
| 1.8333 | 2200 | 0.0006 | - |
| 1.875 | 2250 | 0.0004 | - |
| 1.9167 | 2300 | 0.0006 | - |
| 1.9583 | 2350 | 0.0006 | - |
| 2.0 | 2400 | 0.0003 | - |
| 2.0417 | 2450 | 0.0002 | - |
| 2.0833 | 2500 | 0.0002 | - |
| 2.125 | 2550 | 0.0003 | - |
| 2.1667 | 2600 | 0.0004 | - |
| 2.2083 | 2650 | 0.0004 | - |
| 2.25 | 2700 | 0.0005 | - |
| 2.2917 | 2750 | 0.0005 | - |
| 2.3333 | 2800 | 0.0005 | - |
| 2.375 | 2850 | 0.0007 | - |
| 2.4167 | 2900 | 0.0002 | - |
| 2.4583 | 2950 | 0.0003 | - |
| 2.5 | 3000 | 0.0004 | - |
| 2.5417 | 3050 | 0.0002 | - |
| 2.5833 | 3100 | 0.0004 | - |
| 2.625 | 3150 | 0.0002 | - |
| 2.6667 | 3200 | 0.0002 | - |
| 2.7083 | 3250 | 0.0003 | - |
| 2.75 | 3300 | 0.0002 | - |
| 2.7917 | 3350 | 0.0002 | - |
| 2.8333 | 3400 | 0.0003 | - |
| 2.875 | 3450 | 0.0002 | - |
| 2.9167 | 3500 | 0.0002 | - |
| 2.9583 | 3550 | 0.0002 | - |
| 3.0 | 3600 | 0.0002 | - |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.0
- PyTorch: 2.1.0
- Datasets: 2.14.6
- Tokenizers: 0.14.1
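For completeness, here is a minimal inference sketch for the classifier this card describes. It is an assumption-laden example: `"path/to/setfit-checkpoint"` is a placeholder for wherever this fine-tuned model is saved or published, and the printed labels are only illustrative.

```python
from setfit import SetFitModel

# Load the fine-tuned checkpoint (placeholder path; substitute your own).
model = SetFitModel.from_pretrained("path/to/setfit-checkpoint")

utterances = [
    "please remove me from your call list",
    "sorry, where are you calling from?",
]

# predict() maps each utterance to one of the intent labels listed above,
# e.g. "do_not_call" or "where_are_you_calling_from".
predictions = model.predict(utterances)
print(predictions)
```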
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Qwen/Qwen-72B-Chat-Int4
|
Qwen
| 2024-01-04T06:46:12Z | 214 | 46 |
transformers
|
[
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:2309.16609",
"arxiv:2305.08322",
"arxiv:2009.03300",
"arxiv:2307.11088",
"license:other",
"autotrain_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-11-30T03:41:39Z |
---
language:
- zh
- en
tags:
- qwen
pipeline_tag: text-generation
inference: false
license: other
license_name: tongyi-qianwen-license-agreement
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
---
# Qwen-72B-Chat-Int4
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_qwen.jpg" width="400"/>
<p>
<br>
<p align="center">
🤗 <a href="https://huggingface.co/Qwen">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/qwen">ModelScope</a>   |    📑 <a href="https://arxiv.org/abs/2309.16609">Paper</a>    |   🖥️ <a href="https://modelscope.cn/studios/qwen/Qwen-72B-Chat-Demo/summary">Demo</a>
<br>
<a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat (微信)</a>   |   <a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>   |   <a href="https://dashscope.aliyun.com">API</a>
</p>
<br>
## Introduction

**Qwen-72B** is the 72B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-72B is a Transformer-based large language model, pretrained on a large volume of data, including web texts, books, code, etc. Additionally, based on the pretrained Qwen-72B, we release Qwen-72B-Chat, a large-model-based AI assistant trained with alignment techniques. This repository contains the Int4 quantized model of Qwen-72B-Chat.
The features of Qwen-72B include:
1. **Large-scale high-quality training corpora**: It is pretrained on over 3 trillion tokens, including Chinese, English, multilingual texts, code, and mathematics, covering general and professional fields. The distribution of the pretraining corpus has been optimized through a large number of ablation experiments.
2. **Competitive performance**: It significantly surpasses existing open-source models on multiple Chinese and English downstream evaluation tasks (covering commonsense reasoning, code, mathematics, translation, etc.). See below for specific evaluation results.
3. **More comprehensive vocabulary coverage**: Compared with other open-source models based on Chinese and English vocabularies, Qwen-72B uses a vocabulary of about 150K tokens. This vocabulary is more friendly to multiple languages, enabling users to directly enhance the capability for certain languages without expanding the vocabulary.
4. **Longer context support**: Qwen-72B supports a 32k context length.
5. **System prompt**: Qwen-72B-Chat can realize role playing, language style transfer, task setting, and behavior setting by adjusting the system prompt.
For more details about the open-source Qwen-72B models, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository.
<br>
## Requirements

* python 3.8 and above
* pytorch 2.0 and above
* CUDA 11.4 and above is recommended (this is for GPU users, flash-attention users, etc.)
* **at least 48GB of GPU memory is required (e.g., 1xA100-80G or 2xV100-32G)**
<br>
## Dependency

To run Qwen-72B-Chat-Int4, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries. If you run into problems installing `auto-gptq`, we advise you to check out the official [repo](https://github.com/PanQiWei/AutoGPTQ) for a suitable pre-built wheel.
```bash
pip install "transformers>=4.32.0" accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed
pip install auto-gptq optimum
```
> Note: The pre-compiled `auto-gptq` packages depend strictly on the version of `torch` and its CUDA version. Moreover, due to recent updates,
> you may also encounter unsupported version errors from `transformers`, `optimum`, or `peft`.
> We recommend using the latest versions meeting one of the following requirement sets (a quick way to check what you have installed is sketched below):
> - torch==2.1 auto-gptq>=0.5.1 transformers>=4.35.0 optimum>=1.14.0 peft>=0.6.1
> - torch>=2.0,<2.1 auto-gptq<0.5.0 transformers<4.35.0 optimum<1.14.0 peft>=0.5.0,<0.6.0
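A minimal sketch (not from the Qwen docs) using only the standard library to print the installed versions for comparison against the two rows above:

```python
from importlib.metadata import PackageNotFoundError, version

# Print the installed versions of the packages named in the compatibility
# requirements above, so they can be checked against the two rows.
for name in ["torch", "auto-gptq", "transformers", "optimum", "peft"]:
    try:
        print(f"{name}=={version(name)}")
    except PackageNotFoundError:
        print(f"{name} is not installed")
```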
In addition, installing the `flash-attention` library (**flash attention 2 is now supported**) is recommended for higher efficiency and lower memory usage.
```bash
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention && pip install .
# Below are optional. Installing them might be slow.
# pip install csrc/layer_norm
# If your flash-attn version is higher than 2.1.1, the following is not needed.
# pip install csrc/rotary
```
<br>
## Quickstart

We show an example of how to use Qwen-72B-Chat-Int4 in the following code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-72B-Chat-Int4", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen-72B-Chat-Int4",
device_map="auto",
trust_remote_code=True
).eval()
response, history = model.chat(tokenizer, "你好", history=None)
print(response)
# 你好!很高兴为你提供帮助。
# By adjusting the system prompt, Qwen-72B-Chat can realize role playing, language style transfer, task setting, and behavior setting.
response, _ = model.chat(tokenizer, "你好呀", history=None, system="请用二次元可爱语气和我说话")
print(response)
# 哎呀,你好哇!是怎么找到人家的呢?是不是被人家的魅力吸引过来的呀~(≧▽≦)/~
response, _ = model.chat(tokenizer, "My colleague works diligently", history=None, system="You will write beautiful compliments according to needs")
print(response)
# Your colleague is a shining example of dedication and hard work. Their commitment to their job is truly commendable, and it shows in the quality of their work.
# They are an asset to the team, and their efforts do not go unnoticed. Keep up the great work!
```
Note: To run the quantized model with vLLM, you need to install our [vLLM branch repo](https://github.com/QwenLM/vllm-gptq). The Int8 model is not supported for the time being; we will add support soon.
For more usage instructions, please refer to our [GitHub repo](https://github.com/QwenLM/Qwen).
<br>
## Quantization

### Performance Evaluation

We illustrate the zero-shot performance of the BF16, Int8, and Int4 models on the benchmarks. Results are shown below:
| Quantization | MMLU | CEval (val) | GSM8K | Humaneval |
|--------------|:----:|:-----------:|:-----:|:---------:|
| BF16 | 74.4 | 80.1 | 76.4 | 64.6 |
| Int8 | 73.5 | 80.1 | 73.5 | 62.2 |
| Int4 | 73.4 | 80.1 | 75.3 | 61.6 |
### Inference Speed & GPU Memory Usage

We measured the average inference speed and GPU memory usage of generating 2048 tokens across several settings, including input lengths, quantization levels, versions of flash-attention, and whether vLLM is used.
| Quantization | Setting | # of A100-80G GPUs | Context Length | Generation Length | Speed (Tokens/s) | Total GPU Memory Usage |
| ------------- | :---------------: | :----------------: | :-------------: | :---------------: | :---------------:| :---------------------:|
| BF16 | HF + FlashAttn-v2 | 2 | 1 | 2048 | 8.48 | 144.69GB |
| BF16 | HF + FlashAttn-v1 | 2 | 1 | 2048 | 8.31 | 144.69GB |
| BF16 | HF + No FlashAttn | 2 | 1 | 2048 | 7.89 | 144.69GB |
| BF16 | vLLM | 2 | 1 | 2048 | 17.60 | Pre-Allocated* |
| BF16 | vLLM | 4 | 1 | 2048 | 26.16 | Pre-Allocated* |
| BF16 | HF + FlashAttn-v2 | 4 | 6144 | 2048 | 5.37 | 181.47GB |
| BF16 | HF + FlashAttn-v1 | 4 | 6144 | 2048 | 4.96 | 181.47GB |
| BF16 | HF + No FlashAttn | 4 | 6144 | 2048 | 4.72 | 202.74GB |
| BF16 | vLLM | 4 | 6144 | 2048 | 24.41 | Pre-Allocated* |
| BF16 | vLLM | 4 | 14336 | 2048 | 21.24 | Pre-Allocated* |
| BF16 | vLLM | 4 | 30720 | 2048 | 17.55 | Pre-Allocated* |
| Int8 | HF + FlashAttn-v2 | 2 | 1 | 2048 | 9.05 | 81.27GB |
| Int8 | HF + FlashAttn-v1 | 2 | 1 | 2048 | 8.97 | 81.27GB |
| Int8 | HF + No FlashAttn | 2 | 1 | 2048 | 8.32 | 81.27GB |
| Int8 | HF + FlashAttn-v2 | 3 | 6144 | 2048 | 5.76 | 118.06GB |
| Int8 | HF + FlashAttn-v1 | 3 | 6144 | 2048 | 5.72 | 118.06GB |
| Int8 | HF + No FlashAttn | 2 | 6144 | 2048 | 4.50 | 129.83GB |
| Int8 | HF + FlashAttn-v2 | 4 | 14336 | 2048 | 3.44 | 180.44GB |
| Int8 | HF + FlashAttn-v1 | 4 | 14336 | 2048 | 3.19 | 180.44GB |
| Int8 | HF + No FlashAttn | 4 | 14336 | 2048 | OOM | OOM |
| Int4 | HF + FlashAttn-v2 | 1 | 1 | 2048 | 11.67 | 48.86GB |
| Int4 | HF + FlashAttn-v1 | 1 | 1 | 2048 | 11.27 | 48.86GB |
| Int4 | HF + No FlashAttn | 1 | 1 | 2048 | 11.32 | 48.86GB |
| Int4 | vLLM | 1 | 1 | 2048 | 14.63 | Pre-Allocated* |
| Int4 | vLLM | 2 | 1 | 2048 | 20.76 | Pre-Allocated* |
| Int4 | vLLM | 4 | 1 | 2048 | 27.19 | Pre-Allocated* |
| Int4 | HF + FlashAttn-v2 | 2 | 6144 | 2048 | 6.75 | 85.99GB |
| Int4 | HF + FlashAttn-v1 | 2 | 6144 | 2048 | 6.32 | 85.99GB |
| Int4 | HF + No FlashAttn | 2 | 6144 | 2048 | 5.97 | 88.30GB |
| Int4 | vLLM | 2 | 6144 | 2048 | 18.07 | Pre-Allocated* |
| Int4 | vLLM | 4 | 6144 | 2048 | 24.56 | Pre-Allocated* |
| Int4 | HF + FlashAttn-v2 | 3 | 14336 | 2048 | 4.18 | 148.73GB |
| Int4 | HF + FlashAttn-v1 | 3 | 14336 | 2048 | 3.72 | 148.73GB |
| Int4 | HF + No FlashAttn | 3 | 14336 | 2048 | OOM | OOM |
| Int4 | vLLM | 2 | 14336 | 2048 | 14.51 | Pre-Allocated* |
| Int4 | vLLM | 4 | 14336 | 2048 | 19.28 | Pre-Allocated* |
| Int4 | vLLM | 4 | 30720 | 2048 | 16.93 | Pre-Allocated* |
\* vLLM pre-allocates GPU memory, so the maximum usage cannot be detected. HF refers to using the Huggingface Transformers library for inference.
The speed and memory profiling of HuggingFace Transformers is conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile.py). The profiling runs on A100-SXM4-80G GPUs with PyTorch 2.0.1 (for Huggingface Transformers) / PyTorch 2.1.0 (for vLLM) and CUDA 11.8. A simplified single-run alternative is sketched below.
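The following sketch times `generate` directly for a quick sanity check of throughput on your own hardware. It is not the official profiling methodology (no warm-up, no averaging over repeated runs), so expect noisier numbers than the table above:

```python
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-72B-Chat-Int4", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-72B-Chat-Int4", device_map="auto", trust_remote_code=True
).eval()

inputs = tokenizer("你好", return_tensors="pt").to(model.device)
max_new_tokens = 512  # shorter than the 2048 used in the table, to keep the run quick

torch.cuda.synchronize()
start = time.time()
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
torch.cuda.synchronize()

# Generation may stop early at EOS, so count the tokens actually produced.
n_generated = output.shape[1] - inputs["input_ids"].shape[1]
print(f"{n_generated / (time.time() - start):.2f} tokens/s")
```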
<br>
## Tokenizer
Our tokenizer based on tiktoken is different from other tokenizers, e.g., the sentencepiece tokenizer. You need to pay attention to special tokens, especially in finetuning. For more detailed information on the tokenizer and its use in fine-tuning, please refer to the [documentation](https://github.com/QwenLM/Qwen/blob/main/tokenization_note.md).
<br>
## Model

Identical to the pretrained Qwen-72B model, the details of the model architecture of Qwen-72B-Chat are listed as follows:
| Hyperparameter | Value |
|:----------------|:-------|
| n_layers | 80 |
| n_heads | 64 |
| d_model | 8192 |
| vocab size | 151851 |
| sequence length | 32768 |
For position encoding, the FFN activation function, and normalization, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU as the activation function, and RMSNorm (with optional installation of flash-attention for acceleration).
For tokenization, compared to the current mainstream open-source models based on Chinese and English vocabularies, Qwen-72B-Chat uses a vocabulary of about 150K tokens, built on the `cl100k_base` BPE vocabulary used by GPT-4 and optimized for Chinese and multilingual coverage. It first considers efficient encoding of Chinese, English, and code data, and is also more friendly to other languages, enabling users to directly enhance the capability for some languages without expanding the vocabulary.
It segments numbers by single digits and calls the efficient [tiktoken](https://github.com/openai/tiktoken) tokenizer library for tokenization.
<br>
## Evaluation

For Qwen-72B-Chat, we evaluate the model on standard benchmarks for Chinese understanding (C-Eval), English understanding (MMLU), coding (HumanEval), and mathematics (GSM8K), as well as benchmarks for long-context understanding and tool usage.
Note: Due to rounding errors caused by hardware and framework, small differences in reproduced results are possible.
### Chinese Evaluation

#### C-Eval

We demonstrate the 0-shot & 5-shot accuracy of Qwen-72B-Chat on the [C-Eval](https://arxiv.org/abs/2305.08322) validation set:
| Model | Avg. Acc. |
|:--------------------------------:|:---------:|
| LLaMA2-7B-Chat | 31.9 |
| LLaMA2-13B-Chat | 36.2 |
| LLaMA2-70B-Chat | 44.3 |
| ChatGPT3.5 | 52.5 |
| ChatGPT4 | 69.9 |
| Yi-34B-Chat (0-shot) | 77.0 |
| Yi-34B-Chat (5-shot) | 78.5 |
| Qwen-7B-Chat (original) (0-shot) | 54.2 |
| **Qwen-7B-Chat (0-shot)** | 59.7 |
| **Qwen-7B-Chat (5-shot)** | 59.3 |
| **Qwen-14B-Chat (0-shot)** | 69.8 |
| **Qwen-14B-Chat (5-shot)** | 71.7 |
| **Qwen-72B-Chat (0-shot)** | 80.1 |
| **Qwen-72B-Chat (5-shot)** | 82.9 |
The zero-shot accuracy of Qwen-72B-Chat on the C-Eval test set is provided below:
| Model | Avg. | STEM | Social Sciences | Humanities | Others |
| :---------------------- | :------: | :--: | :-------------: | :--------: | :----: |
| Qwen-7B-Chat (original) | 54.6 | 47.8 | 67.6 | 59.3 | 50.6 |
| **Qwen-7B-Chat** | 58.6 | 53.3 | 72.1 | 62.8 | 52.0 |
| **Qwen-14B-Chat** | 69.1 | 65.1 | 80.9 | 71.2 | 63.4 |
| **Qwen-72B-Chat** | 79.5 | 74.5 | 89.1 | 81.2 | 78.1 |
### English Evaluation

#### MMLU

The 0-shot & 5-shot accuracy of Qwen-72B-Chat on [MMLU](https://arxiv.org/abs/2009.03300) is provided below.
Qwen-72B-Chat remains at the top among other human-aligned models of comparable size.
| Model | Avg. Acc. |
|:--------------------------------:|:---------:|
| LLaMA2-7B-Chat | 46.2 |
| LLaMA2-13B-Chat | 54.6 |
| LLaMA2-70B-Chat | 63.8 |
| Yi-34B-Chat (0-shot) | 67.6 |
| Yi-34B-Chat (5-shot) | 73.4 |
| ChatGPT3.5 | 69.1 |
| ChatGPT4 | 83.0 |
| Qwen-7B-Chat (original) (0-shot) | 53.9 |
| **Qwen-7B-Chat (0-shot)** | 55.8 |
| **Qwen-7B-Chat (5-shot)** | 57.0 |
| **Qwen-14B-Chat (0-shot)** | 64.6 |
| **Qwen-14B-Chat (5-shot)** | 66.5 |
| **Qwen-72B-Chat (0-shot)** | 74.3 |
| **Qwen-72B-Chat (5-shot)** | 75.0 |
### Coding Evaluation

The zero-shot Pass@1 of Qwen-72B-Chat on [HumanEval](https://github.com/openai/human-eval) is demonstrated below:
| Model | Pass@1 |
|:-----------------------:|:--------:|
| LLaMA2-7B-Chat | 12.2 |
| LLaMA2-13B-Chat | 18.9 |
| LLaMA2-70B-Chat | 32.3 |
| Yi-34B-Chat | 33.5 |
| ChatGPT3.5 | 73.2 |
| ChatGPT4 | 86.6 |
| Qwen-7B-Chat (original) | 24.4 |
| **Qwen-7B-Chat** | 37.2 |
| **Qwen-14B-Chat** | 43.9 |
| **Qwen-72B-Chat** | 64.6 |
### Mathematics Evaluation

The accuracy of Qwen-72B-Chat on [GSM8K](https://github.com/openai/grade-school-math), which evaluates mathematical ability, is shown below:
| Model | Acc. |
|:--------------------------------:|:--------:|
| LLaMA2-7B-Chat | 26.3 |
| LLaMA2-13B-Chat | 37.1 |
| LLaMA2-70B-Chat | 59.3 |
| Yi-34B-Chat | 71.6 |
| ChatGPT3.5 | 73.2 |
| ChatGPT4 | 91.4 |
| Qwen-7B-Chat (original) (0-shot) | 41.1 |
| **Qwen-7B-Chat (0-shot)** | 50.3 |
| **Qwen-7B-Chat (8-shot)** | 54.1 |
| **Qwen-14B-Chat (0-shot)** | 60.1 |
| **Qwen-14B-Chat (8-shot)** | 59.3 |
| **Qwen-72B-Chat (0-shot)** | 76.4 |
| **Qwen-72B-Chat (8-shot)** | 75.7 |
### Long-Context Understanding

Qwen-72B-Chat supports context lengths of up to 32k. Its scores on the closed-ended tasks of [L-Eval](https://arxiv.org/abs/2307.11088) are as follows:
| Model | Average | Coursera | GSM | QuALITY | TOEFL | CodeU | SFcition |
|:------------------|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| ChatGPT-3.5-16k | 60.73 | **63.51** | **84.00** | 61.38 | 78.43 | **12.22** | 64.84 |
| **Qwen-72B-Chat** | **62.30** | 58.13 | 76.00 | **77.22** | **86.24** | 6.66 | **69.53** |
We further conducted the "needle in a haystack" experiment (the idea came from [@Greg Kamradt](https://twitter.com/GregKamradt/status/1727018183608193393)) to test whether the model can retrieve information placed at different positions in inputs of different lengths. The result is as follows:

The above results show that Qwen-72B-Chat can accurately retrieve information placed at various positions within an input length of 32k, proving its excellent long-text understanding capabilities.
## FAQ
If you run into problems, please refer to the [FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ.md) and existing issues to search for a solution before opening a new issue.
<br>
## Citation

If you find our work helpful, feel free to cite us!
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
<br>
## License Agreement

Our code and checkpoints are open for research purposes, and commercial use is allowed. Check the [LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) for details. For commercial use, please fill out the [form](https://dashscope.console.aliyun.com/openModelApply/Qwen-72B-Chat) to apply.
<br>
## Contact Us

If you are interested in leaving a message to either our research team or product team, join our Discord or WeChat groups! Also, feel free to send an email to qianwen_opensource@alibabacloud.com.
|
Qwen/Qwen-7B
|
Qwen
| 2024-01-04T06:42:50Z | 20,459 | 374 |
transformers
|
[
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:2309.16609",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-08-03T02:51:18Z |
---
language:
- zh
- en
tags:
- qwen
pipeline_tag: text-generation
inference: false
license: other
license_name: tongyi-qianwen-license-agreement
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
---
# Qwen-7B
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_qwen.jpg" width="400"/>
<p>
<br>
<p align="center">
🤗 <a href="https://huggingface.co/Qwen">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/qwen">ModelScope</a>   |    📑 <a href="https://arxiv.org/abs/2309.16609">Paper</a>    |   🖥️ <a href="https://modelscope.cn/studios/qwen/Qwen-7B-Chat-Demo/summary">Demo</a>
<br>
<a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat (微信)</a>   |   <a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>   |   <a href="https://dashscope.aliyun.com">API</a>
</p>
<br>
## Introduction

**Qwen-7B** is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model, pretrained on a large volume of data, including web texts, books, code, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant trained with alignment techniques. We have now updated both our pretrained and chat models for better performance. This repository contains the Qwen-7B base language model.
The features of Qwen-7B include:
1. **Large-scale high-quality training corpora**: It is pretrained on over 2.4 trillion tokens, including Chinese, English, multilingual texts, code, and mathematics, covering general and professional fields. The distribution of the pretraining corpus has been optimized through a large number of ablation experiments.
2. **Competitive performance**: It significantly surpasses existing open-source models of similar scale on multiple Chinese and English downstream evaluation tasks (including commonsense, reasoning, code, mathematics, etc.), and even surpasses some larger-scale models on several benchmarks. See below for specific evaluation results.
3. **More comprehensive vocabulary coverage**: Compared with other open-source models based on Chinese and English vocabularies, Qwen-7B uses a vocabulary of over 150K tokens. This vocabulary is more friendly to multiple languages, enabling users to directly enhance the capability for certain languages without expanding the vocabulary.
For more details about Qwen, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository.
<br>
## Requirements

* python 3.8 and above
* pytorch 1.12 and above, 2.0 and above recommended
* CUDA 11.4 and above is recommended (this is for GPU users, flash-attention users, etc.)
<br>
## Dependency

To run Qwen-7B, please make sure you meet the above requirements, and then execute the following pip command to install the dependent libraries.
```bash
pip install transformers==4.32.0 accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed
```
In addition, installing the `flash-attention` library (**flash attention 2 is now supported**) is recommended for higher efficiency and lower memory usage.
```bash
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention && pip install .
# Below are optional. Installing them might be slow.
# pip install csrc/layer_norm
# pip install csrc/rotary
```
<br>
## Quickstart

You can easily call the model with the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
# use bf16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", device_map="cpu", trust_remote_code=True).eval()
# use auto mode, automatically select precision based on the device.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", device_map="auto", trust_remote_code=True).eval()
# Specify hyperparameters for generation. But if you use transformers>=4.32.0, there is no need to do this.
# model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
inputs = tokenizer('蒙古国的首都是乌兰巴托(Ulaanbaatar)\n冰岛的首都是雷克雅未克(Reykjavik)\n埃塞俄比亚的首都是', return_tensors='pt')
inputs = inputs.to(model.device)
pred = model.generate(**inputs)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
# 蒙古国的首都是乌兰巴托(Ulaanbaatar)\n冰岛的首都是雷克雅未克(Reykjavik)\n埃塞俄比亚的首都是亚的斯亚贝巴(Addis Ababa)...
```
For more usage instructions, please refer to our [GitHub repo](https://github.com/QwenLM/Qwen).
<br>
## Tokenizer
Our tokenizer based on tiktoken is different from other tokenizers, e.g., the sentencepiece tokenizer. You need to pay attention to special tokens, especially in finetuning. For more detailed information on the tokenizer and its use in fine-tuning, please refer to the [documentation](https://github.com/QwenLM/Qwen/blob/main/tokenization_note.md).
<br>
## Model

The details of the model architecture of Qwen-7B are listed as follows.
| Hyperparameter | Value |
|:----------------|:-------|
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 151851 |
| sequence length | 8192 |
<p align="center">
    <img src="assets/tokenizer.png" style="width: 1200px"/>
</p>

For position encoding, the FFN activation function, and normalization, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU as the activation function, and RMSNorm (with optional installation of flash-attention for acceleration).
For tokenization, compared to the current mainstream open-source models based on Chinese and English vocabularies, Qwen-7B uses a vocabulary of over 150K tokens, built on the `cl100k_base` BPE vocabulary used by GPT-4 and optimized for Chinese and multilingual coverage. It first considers efficient encoding of Chinese, English, and code data, and is also more friendly to other languages, enabling users to directly enhance the capability for some languages without expanding the vocabulary. It segments numbers by single digits and calls the [tiktoken](https://github.com/openai/tiktoken) tokenizer library for efficient tokenization.
We randomly selected 1 million documents per language to compare the encoding compression rates of different models (with XLM-R, which supports 100 languages, as the base value 1; lower is better). The specific performance is shown in the figure above.
As can be seen, while ensuring efficient decoding of Chinese, English, and code, Qwen-7B also achieves a high compression rate for many other widely used languages (such as th, he, ar, ko, vi, ja, tr, id, pl, ru, nl, pt, it, de, es, fr, etc.), equipping the model with strong scalability as well as high training and inference efficiency in these languages.
The pretraining corpus exceeds 2.4T tokens after deduplication and filtering, encompassing web text, encyclopedias, books, code, mathematics, and various vertical domains.
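As a rough, hedged proxy for the corpus-level compression measurement above, the sketch below compares token counts for a single text between the Qwen tokenizer and the XLM-R baseline. It ignores corpus sampling and special-token handling, so the number is only indicative:

```python
from transformers import AutoTokenizer

qwen = AutoTokenizer.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
xlmr = AutoTokenizer.from_pretrained("xlm-roberta-base")

# Fewer tokens for the same text means better compression; a ratio below 1.0
# means Qwen encodes the text more compactly than the XLM-R baseline.
text = "Dit is een Nederlandse voorbeeldzin over taalmodellen."
ratio = len(qwen.encode(text)) / len(xlmr.encode(text, add_special_tokens=False))
print(f"Qwen/XLM-R token ratio: {ratio:.2f}")
```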
<br>
## Evaluation

We selected MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP, BBH, and CMMLU, which are currently popular benchmarks, to comprehensively test the model's Chinese and English knowledge, translation, mathematical reasoning, coding, and other capabilities. As the results below show, the Qwen models outperform similarly sized open-source models on all tasks.
| Model | MMLU | C-Eval | GSM8K | MATH | HumanEval | MBPP | BBH | CMMLU |
|:-------------------|:--------:|:--------:|:--------:|:--------:|:---------:|:--------:|:--------:|:--------:|
| | 5-shot | 5-shot | 8-shot | 4-shot | 0-shot | 3-shot | 3-shot | 5-shot |
| LLaMA2-7B | 46.8 | 32.5 | 16.7 | 3.3 | 12.8 | 20.8 | 38.2 | 31.8 |
| LLaMA2-13B | 55.0 | 41.4 | 29.6 | 5.0 | 18.9 | 30.3 | 45.6 | 38.4 |
| LLaMA2-34B | 62.6 | - | 42.2 | 6.2 | 22.6 | 33.0 | 44.1 | - |
| ChatGLM2-6B | 47.9 | 51.7 | 32.4 | 6.5 | - | - | 33.7 | - |
| InternLM-7B | 51.0 | 53.4 | 31.2 | 6.3 | 10.4 | 14.0 | 37.0 | 51.8 |
| InternLM-20B | 62.1 | 58.8 | 52.6 | 7.9 | 25.6 | 35.6 | 52.5 | 59.0 |
| Baichuan2-7B | 54.7 | 56.3 | 24.6 | 5.6 | 18.3 | 24.2 | 41.6 | 57.1 |
| Baichuan2-13B | 59.5 | 59.0 | 52.8 | 10.1 | 17.1 | 30.2 | 49.0 | 62.0 |
| Qwen-7B (original) | 56.7 | 59.6 | 51.6 | - | 24.4 | 31.2 | 40.6 | 58.8 |
| **Qwen-7B** | 58.2 | 63.5 | 51.7 | 11.6 | 29.9 | 31.6 | 45.0 | 62.2 |
| **Qwen-14B** | **66.3** | **72.1** | **61.3** | **24.8** | **32.3** | **40.8** | **53.4** | **71.0** |
### Long-Context Evaluation

We introduce NTK-aware interpolation, LogN attention scaling, window attention, etc., to extend the context length of Qwen-7B (original) and Qwen-14B from 2K to over 8K tokens, and that of Qwen-7B from 8K to 32K. We conduct language modeling experiments on the arXiv dataset with PPL evaluation at different sequence lengths. Results are demonstrated below:
**(To use NTK interpolation and LogN scaling, please set `use_dynamic_ntk` and `use_logn_attn` to true in config.json; see the sketch after the table below.)**
<table>
<tr>
<th rowspan="2">Model</th><th colspan="6" align="center">Sequence Length</th>
</tr>
<tr>
<th align="center">1024</th><th align="center">2048</th><th align="center">4096</th><th align="center">8192</th><th align="center">16384</th><th align="center">32768</th>
</tr>
<tr>
<td>Qwen-7B (original)</td><td align="center">4.23</td><td align="center">3.78</td><td align="center">39.35</td><td align="center">469.81</td><td align="center">2645.09</td><td align="center">-</td>
</tr>
<tr>
<td>+ dynamic_ntk</td><td align="center">4.23</td><td align="center">3.78</td><td align="center">3.59</td><td align="center">3.66</td><td align="center">5.71</td><td align="center">-</td>
</tr>
<tr>
<td>+ dynamic_ntk + logn</td><td align="center">4.23</td><td align="center">3.78</td><td align="center">3.58</td><td align="center">3.56</td><td align="center">4.62</td><td align="center">-</td>
</tr>
<tr>
<td>+ dynamic_ntk + logn + window_attn</td><td align="center">4.23</td><td align="center">3.78</td><td align="center">3.58</td><td align="center">3.49</td><td align="center">4.32</td><td align="center">-</td>
</tr>
<tr>
<td>Qwen-7B</td><td align="center"><b>4.23</b></td><td align="center"><b>3.81</b></td><td align="center"><b>3.52</b></td><td align="center"><b>3.31</b></td><td align="center">7.27</td><td align="center">181.49</td>
</tr>
<tr>
<td>+ dynamic_ntk + logn + window_attn</td><td align="center"><b>4.23</b></td><td align="center"><b>3.81</b></td><td align="center"><b>3.52</b></td><td align="center"><b>3.33</b></td><td align="center"><b>3.22</b></td><td align="center"><b>3.17</b></td>
</tr>
<tr>
<td>Qwen-14B</td><td align="center"><b>-</b></td><td align="center"><b>3.46</b></td><td align="center">22.79</td><td align="center">334.65</td><td align="center">3168.35</td><td align="center">-</td>
</tr>
<tr>
<td>+ dynamic_ntk + logn + window_attn</td><td align="center"><b>-</b></td><td align="center"><b>3.46</b></td><td align="center"><b>3.29</b></td><td align="center"><b>3.18</b></td><td align="center">3.42</td><td align="center">-</td>
</tr>
</table>
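For readers who prefer not to edit config.json on disk, the flags can also be flipped programmatically. This minimal sketch mirrors the config.json instruction above (the attribute names come from the card itself):

```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
config.use_dynamic_ntk = True   # NTK-aware interpolation
config.use_logn_attn = True     # LogN attention scaling

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B", config=config, device_map="auto", trust_remote_code=True
).eval()
```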
## Reproduction

We provide evaluation scripts to reproduce the performance of our models; see this [link](https://github.com/QwenLM/Qwen/tree/main/eval) for details. Note: due to rounding errors caused by hardware and framework, small fluctuations in reproduced results are normal.
<br>
## FAQ
If you run into problems, please refer to the [FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ.md) and existing issues to search for a solution before opening a new issue.
<br>
## Citation

If you find our work helpful, feel free to cite us!
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
<br>
## License Agreement

Our code and checkpoints are open for research purposes, and commercial use is allowed. Check the [LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) for details. For commercial use, please fill out the [form](https://dashscope.console.aliyun.com/openModelApply/qianwen) to apply.
<br>
## Contact Us

If you are interested in leaving a message to either our research team or product team, join our Discord or WeChat groups! Also, feel free to send an email to qianwen_opensource@alibabacloud.com.
|
Qwen/Qwen-7B-Chat-Int4
|
Qwen
| 2024-01-04T06:40:58Z | 1,630 | 68 |
transformers
|
[
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:2309.16609",
"arxiv:2305.08322",
"arxiv:2009.03300",
"arxiv:2305.05280",
"arxiv:2210.03629",
"license:other",
"autotrain_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-08-20T14:57:46Z |
---
language:
- zh
- en
tags:
- qwen
pipeline_tag: text-generation
inference: false
license: other
license_name: tongyi-qianwen-license-agreement
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
---
# Qwen-7B-Chat-Int4
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_qwen.jpg" width="400"/>
<p>
<br>
<p align="center">
🤗 <a href="https://huggingface.co/Qwen">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/qwen">ModelScope</a>   |    📑 <a href="https://arxiv.org/abs/2309.16609">Paper</a>    |   🖥️ <a href="https://modelscope.cn/studios/qwen/Qwen-7B-Chat-Demo/summary">Demo</a>
<br>
<a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat (微信)</a>   |   <a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>   |   <a href="https://dashscope.aliyun.com">API</a>
</p>
<br>
## Introduction

**Qwen-7B** is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model, pretrained on a large volume of data, including web texts, books, code, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant trained with alignment techniques. We have now updated both our pretrained and chat models for better performance. This repository contains the Int4 quantized model of Qwen-7B-Chat.
For more details about the open-source Qwen-7B models, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository.
<br>
## Requirements

* python 3.8 and above
* pytorch 2.0 and above
* CUDA 11.4 and above is recommended (this is for GPU users, flash-attention users, etc.)
<br>
## Dependency

To run Qwen-7B-Chat-Int4, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries. If you run into problems installing `auto-gptq`, we advise you to check out the official [repo](https://github.com/PanQiWei/AutoGPTQ) for a suitable pre-built wheel.
```bash
pip install transformers==4.32.0 accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed
pip install auto-gptq optimum
```
In addition, installing the `flash-attention` library (**flash attention 2 is now supported**) is recommended for higher efficiency and lower memory usage.
```bash
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention && pip install .
# Below are optional. Installing them might be slow.
# pip install csrc/layer_norm
# pip install csrc/rotary
```
If you require higher inference performance but the optional acceleration modules above (`layer_norm` and `rotary`) fail to install, or your GPU does not meet the NVIDIA Ampere/Ada/Hopper architecture required by the `flash-attention` library, you can switch to the dev_triton branch and try the Triton-based inference acceleration implemented there. It covers a wider range of GPU products and requires no extra installation with pytorch 2.0 and above.
<br>
## Quickstart

We show an example of how to use Qwen-7B-Chat-Int4 in the following code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat-Int4", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen-7B-Chat-Int4",
device_map="auto",
trust_remote_code=True
).eval()
response, history = model.chat(tokenizer, "你好", history=None)
print(response)
# 你好!很高兴为你提供帮助。
```
For more usage instructions, please refer to our [GitHub repo](https://github.com/QwenLM/Qwen).
<br>
## Quantization

### Performance Evaluation

We illustrate the zero-shot performance of the BF16, Int8, and Int4 models on the benchmarks, and we find that the quantized models do not suffer from significant performance degradation. Results are shown below:
| Quantization | MMLU | CEval (val) | GSM8K | Humaneval |
| ------------- | :--------: | :----------: | :----: | :--------: |
| BF16 | 55.8 | 59.7 | 50.3 | 37.2 |
| Int8 | 55.4 | 59.4 | 48.3 | 34.8 |
| Int4 | 55.1 | 59.2 | 49.7 | 29.9 |
### Inference Speed

We measured the average inference speed of generating 2048 and 8192 tokens under different quantization levels and versions of flash-attention, respectively:
| Quantization | FlashAttn | Speed: 2048 tokens (tokens/s) | Speed: 8192 tokens (tokens/s) |
| ------------- | :-------: | :---------------------------: | :---------------------------: |
| BF16 | v2 | 40.93 | 36.14 |
| Int8 | v2 | 37.47 | 32.54 |
| Int4 | v2 | 50.09 | 38.61 |
| BF16 | v1 | 40.75 | 35.34 |
| Int8 | v1 | 37.51 | 32.39 |
| Int4 | v1 | 45.98 | 36.47 |
| BF16 | Disabled | 37.55 | 33.56 |
| Int8 | Disabled | 37.84 | 32.65 |
| Int4 | Disabled | 48.12 | 36.70 |
In detail, the profiling setting is generating 8192 new tokens with a 1-token context. The profiling runs on a single A100-SXM4-80G GPU with PyTorch 2.0.1 and CUDA 11.8. The inference speed is averaged over the 8192 generated tokens.
Note: The generation speed of the Int4/Int8 models above is measured with the auto-gptq library. Models loaded via `AutoModelForCausalLM.from_pretrained` are currently about 20% slower. We have reported this issue to the HuggingFace team and will update here promptly if a solution is available.
### GPU Memory Usage

We also profiled the peak GPU memory usage for encoding 2048 tokens as context (and generating a single token) and for generating 8192 tokens (with a single token as context) under different quantization levels. (The GPU memory usage is similar whether or not flash-attention is used.) The results are shown below.
| Quantization Level | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens |
| ------------------ | :---------------------------------: | :-----------------------------------: |
| BF16 | 16.99GB | 22.53GB |
| Int8 | 11.20GB | 16.62GB |
| Int4 | 8.21GB | 13.63GB |
The above speed and memory profiling is conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile.py); a simplified do-it-yourself sketch follows below.
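To reproduce a rough version of the encoding measurement, here is a minimal sketch using PyTorch's peak-memory counters. It is not the official script, and with a multi-GPU `device_map="auto"` placement it only reports the current device:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat-Int4", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat-Int4", device_map="auto", trust_remote_code=True
).eval()

torch.cuda.reset_peak_memory_stats()
inputs = tokenizer("你好" * 1024, return_tensors="pt").to(model.device)  # long dummy context
with torch.no_grad():
    model.generate(**inputs, max_new_tokens=1)  # encode the context, emit one token
print(f"peak allocated: {torch.cuda.max_memory_allocated() / 2**30:.2f} GiB")
```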
<br>
## Tokenizer
Our tokenizer based on tiktoken is different from other tokenizers, e.g., the sentencepiece tokenizer. You need to pay attention to special tokens, especially in finetuning. For more detailed information on the tokenizer and its use in fine-tuning, please refer to the [documentation](https://github.com/QwenLM/Qwen/blob/main/tokenization_note.md).
<br>
## Model

Identical to the pretrained Qwen-7B model, the details of the model architecture of Qwen-7B-Chat are listed as follows:
| Hyperparameter | Value |
|:----------------|:------:|
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 151851 |
| sequence length | 8192 |
For position encoding, the FFN activation function, and normalization, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU as the activation function, and RMSNorm (with optional installation of flash-attention for acceleration).
For tokenization, compared to the current mainstream open-source models based on Chinese and English vocabularies, Qwen-7B-Chat uses a vocabulary of about 150K tokens, built on the `cl100k_base` BPE vocabulary used by GPT-4 and optimized for Chinese and multilingual coverage. It first considers efficient encoding of Chinese, English, and code data, and is also more friendly to other languages, enabling users to directly enhance the capability for some languages without expanding the vocabulary.
It segments numbers by single digits and calls the [tiktoken](https://github.com/openai/tiktoken) tokenizer library for efficient tokenization.
<br>
## Evaluation

For Qwen-7B-Chat, we evaluate the model on standard benchmarks for Chinese understanding (C-Eval), English understanding (MMLU), coding (HumanEval), and mathematics (GSM8K), as well as benchmarks for long-context understanding. Since alignment gives Qwen-7B-Chat strong external system-calling abilities, we also evaluate tool usage.
Note: Due to rounding errors caused by hardware and framework, small differences in reproduced results are possible.
### Chinese Evaluation

#### C-Eval

We demonstrate the 0-shot & 5-shot accuracy of Qwen-7B-Chat on the [C-Eval](https://arxiv.org/abs/2305.08322) validation set:
| Model | Avg. Acc. |
|:--------------------------------:|:---------:|
| LLaMA2-7B-Chat | 31.9 |
| LLaMA2-13B-Chat | 36.2 |
| LLaMA2-70B-Chat | 44.3 |
| ChatGLM2-6B-Chat | 52.6 |
| InternLM-7B-Chat | 53.6 |
| Baichuan2-7B-Chat | 55.6 |
| Baichuan2-13B-Chat | 56.7 |
| Qwen-7B-Chat (original) (0-shot) | 54.2 |
| **Qwen-7B-Chat (0-shot)** | 59.7 |
| **Qwen-7B-Chat (5-shot)** | 59.3 |
| **Qwen-14B-Chat (0-shot)** | 69.8 |
| **Qwen-14B-Chat (5-shot)** | **71.7** |
The zero-shot accuracy of Qwen-7B-Chat on the C-Eval test set is provided below:
| Model | Avg. | STEM | Social Sciences | Humanities | Others |
| :---------------------- | :------: | :--: | :-------------: | :--------: | :----: |
| Chinese-Alpaca-Plus-13B | 41.5 | 36.6 | 49.7 | 43.1 | 41.2 |
| Chinese-Alpaca-2-7B | 40.3 | - | - | - | - |
| ChatGLM2-6B-Chat | 50.1 | 46.4 | 60.4 | 50.6 | 46.9 |
| Baichuan-13B-Chat | 51.5 | 43.7 | 64.6 | 56.2 | 49.2 |
| Qwen-7B-Chat (original) | 54.6 | 47.8 | 67.6 | 59.3 | 50.6 |
| **Qwen-7B-Chat** | 58.6 | 53.3 | 72.1 | 62.8 | 52.0 |
| **Qwen-14B-Chat** | **69.1** | 65.1 | 80.9 | 71.2 | 63.4 |
Among 7B-scale models, the human-aligned Qwen-7B-Chat remains at the top in C-Eval accuracy compared with models of similar size.
### English Evaluation

#### MMLU

The 0-shot & 5-shot accuracy of Qwen-7B-Chat on [MMLU](https://arxiv.org/abs/2009.03300) is provided below.
Qwen-7B-Chat remains at the top among other human-aligned models of comparable size.
| Model | Avg. Acc. |
|:--------------------------------:|:---------:|
| ChatGLM2-6B-Chat | 46.0 |
| LLaMA2-7B-Chat | 46.2 |
| InternLM-7B-Chat | 51.1 |
| Baichuan2-7B-Chat | 52.9 |
| LLaMA2-13B-Chat | 54.6 |
| Baichuan2-13B-Chat | 57.3 |
| LLaMA2-70B-Chat | 63.8 |
| Qwen-7B-Chat (original) (0-shot) | 53.9 |
| **Qwen-7B-Chat (0-shot)** | 55.8 |
| **Qwen-7B-Chat (5-shot)** | 57.0 |
| **Qwen-14B-Chat (0-shot)** | 64.6 |
| **Qwen-14B-Chat (5-shot)** | **66.5** |
### Coding Evaluation

The zero-shot Pass@1 of Qwen-7B-Chat on [HumanEval](https://github.com/openai/human-eval) is demonstrated below:
| Model | Pass@1 |
|:-----------------------:|:--------:|
| ChatGLM2-6B-Chat | 11.0 |
| LLaMA2-7B-Chat | 12.2 |
| InternLM-7B-Chat | 14.6 |
| Baichuan2-7B-Chat | 13.4 |
| LLaMA2-13B-Chat | 18.9 |
| Baichuan2-13B-Chat | 17.7 |
| LLaMA2-70B-Chat | 32.3 |
| Qwen-7B-Chat (original) | 24.4 |
| **Qwen-7B-Chat** | 37.2 |
| **Qwen-14B-Chat** | **43.9** |
### Mathematics Evaluation

The accuracy of Qwen-7B-Chat on [GSM8K](https://github.com/openai/grade-school-math), which evaluates mathematical ability, is shown below:
| Model | Acc. |
|:--------------------------------:|:--------:|
| LLaMA2-7B-Chat | 26.3 |
| ChatGLM2-6B-Chat | 28.8 |
| Baichuan2-7B-Chat | 32.8 |
| InternLM-7B-Chat | 33.0 |
| LLaMA2-13B-Chat | 37.1 |
| Baichuan2-13B-Chat | 55.3 |
| LLaMA2-70B-Chat | 59.3 |
| Qwen-7B-Chat (original) (0-shot) | 41.1 |
| **Qwen-7B-Chat (0-shot)** | 50.3 |
| **Qwen-7B-Chat (8-shot)** | 54.1 |
| **Qwen-14B-Chat (0-shot)** | **60.1** |
| **Qwen-14B-Chat (8-shot)** | 59.3 |
### Long-Context Understanding

We introduce NTK-aware interpolation and LogN attention scaling to extend the context length of Qwen-7B-Chat. The Rouge-L results of Qwen-7B-Chat on the long-text summarization dataset [VCSUM](https://arxiv.org/abs/2305.05280) (whose average text length is around 15K) are shown below:
**(To use these tricks, please set `use_dynamic_ntk` and `use_logn_attn` to true in config.json.)**
| Model | VCSUM (zh) |
|:------------------|:----------:|
| GPT-3.5-Turbo-16k | 16.0 |
| LLama2-7B-Chat | 0.2 |
| InternLM-7B-Chat | 13.0 |
| ChatGLM2-6B-Chat | 16.3 |
| **Qwen-7B-Chat** | **16.6** |
### Tool Usage

#### ReAct Prompting

Qwen-Chat supports calling plugins/tools/APIs through [ReAct Prompting](https://arxiv.org/abs/2210.03629). ReAct is also one of the main approaches used by the [LangChain](https://python.langchain.com/) framework. On our open-sourced evaluation benchmark for assessing tool usage capabilities, Qwen-Chat's performance is as follows:
<table>
<tr>
<th colspan="4" align="center">Chinese Tool-Use Benchmark</th>
</tr>
<tr>
<th align="center">Model</th><th align="center">Tool Selection (Acc.↑)</th><th align="center">Tool Input (Rouge-L↑)</th><th align="center">False Positive Error↓</th>
</tr>
<tr>
<td>GPT-4</td><td align="center">95%</td><td align="center">0.90</td><td align="center">15.0%</td>
</tr>
<tr>
<td>GPT-3.5</td><td align="center">85%</td><td align="center">0.88</td><td align="center">75.0%</td>
</tr>
<tr>
<td>Qwen-7B-Chat</td><td align="center">98%</td><td align="center">0.91</td><td align="center">7.3%</td>
</tr>
<tr>
<td>Qwen-14B-Chat</td><td align="center">98%</td><td align="center">0.93</td><td align="center">2.4%</td>
</tr>
</table>
> The plugins that appear in the evaluation set do not appear in Qwen's training set. This benchmark evaluates the model's accuracy in selecting the correct plugin from multiple candidates, the soundness of the parameters passed into the plugin, and the false positive rate. False positive: incorrectly invoking a plugin when responding to a query that should not have called one.


#### Code Interpreter

To assess Qwen's ability to use the Python Code Interpreter for tasks such as mathematical problem solving, data visualization, and other general-purpose tasks such as file handling and web scraping, we have created and open-sourced a [benchmark](https://github.com/QwenLM/Qwen-Agent/tree/main/benchmark) specifically designed for evaluating these capabilities.
We have observed that Qwen performs well in terms of code executability and result accuracy when generating code:
<table>
<tr>
<th colspan="4" align="center">Executable Rate of Generated Code (%)</th>
</tr>
<tr>
<th align="center">Model</th><th align="center">Math↑</th><th align="center">Visualization↑</th><th align="center">General↑</th>
</tr>
<tr>
<td>GPT-4</td><td align="center">91.9</td><td align="center">85.9</td><td align="center">82.8</td>
</tr>
<tr>
<td>GPT-3.5</td><td align="center">89.2</td><td align="center">65.0</td><td align="center">74.1</td>
</tr>
<tr>
<td>LLaMA2-7B-Chat</td>
<td align="center">41.9</td>
<td align="center">33.1</td>
<td align="center">24.1 </td>
</tr>
<tr>
<td>LLaMA2-13B-Chat</td>
<td align="center">50.0</td>
<td align="center">40.5</td>
<td align="center">48.3 </td>
</tr>
<tr>
<td>CodeLLaMA-7B-Instruct</td>
<td align="center">85.1</td>
<td align="center">54.0</td>
<td align="center">70.7 </td>
</tr>
<tr>
<td>CodeLLaMA-13B-Instruct</td>
<td align="center">93.2</td>
<td align="center">55.8</td>
<td align="center">74.1 </td>
</tr>
<tr>
<td>InternLM-7B-Chat</td>
<td align="center">78.4</td>
<td align="center">44.2</td>
<td align="center">62.1 </td>
</tr>
<tr>
<td>InternLM-20B-Chat</td>
<td align="center">70.3</td>
<td align="center">44.2</td>
<td align="center">65.5 </td>
</tr>
<tr>
<td>Qwen-7B-Chat</td>
<td align="center">82.4</td>
<td align="center">64.4</td>
<td align="center">67.2 </td>
</tr>
<tr>
<td>Qwen-14B-Chat</td>
<td align="center">89.2</td>
<td align="center">84.1</td>
<td align="center">65.5</td>
</tr>
</table>
<table>
<tr>
<th colspan="4" align="center">Accuracy of Code Execution Results (%)</th>
</tr>
<tr>
<th align="center">Model</th><th align="center">Math↑</th><th align="center">Visualization-Hard↑</th><th align="center">Visualization-Easy↑</th>
</tr>
<tr>
<td>GPT-4</td><td align="center">82.8</td><td align="center">66.7</td><td align="center">60.8</td>
</tr>
<tr>
<td>GPT-3.5</td><td align="center">47.3</td><td align="center">33.3</td><td align="center">55.7</td>
</tr>
<tr>
<td>LLaMA2-7B-Chat</td>
<td align="center">3.9</td>
<td align="center">14.3</td>
<td align="center">39.2 </td>
</tr>
<tr>
<td>LLaMA2-13B-Chat</td>
<td align="center">8.3</td>
<td align="center">8.3</td>
<td align="center">40.5 </td>
</tr>
<tr>
<td>CodeLLaMA-7B-Instruct</td>
<td align="center">14.3</td>
<td align="center">26.2</td>
<td align="center">60.8 </td>
</tr>
<tr>
<td>CodeLLaMA-13B-Instruct</td>
<td align="center">28.2</td>
<td align="center">27.4</td>
<td align="center">62.0 </td>
</tr>
<tr>
<td>InternLM-7B-Chat-v1.1</td>
<td align="center">28.5</td>
<td align="center">4.8</td>
<td align="center">40.5 </td>
</tr>
<tr>
<td>InternLM-20B-Chat</td>
<td align="center">34.6</td>
<td align="center">21.4</td>
<td align="center">45.6 </td>
</tr>
<tr>
<td>Qwen-7B-Chat</td>
<td align="center">41.9</td>
<td align="center">40.5</td>
<td align="center">54.4 </td>
</tr>
<tr>
<td>Qwen-14B-Chat</td>
<td align="center">58.4</td>
<td align="center">53.6</td>
<td align="center">59.5</td>
</tr>
</table>
<p align="center">
<br>
<img src="assets/code_interpreter_showcase_001.jpg" />
<br>
</p>
#### Huggingface Agent
千问还具备作为 [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents) 的能力。它在 Huggingface 提供的run模式评测基准上的表现如下:
Qwen-Chat also has the capability to be used as a [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents). Its performance on the run-mode benchmark provided by HuggingFace is as follows:
<table>
<tr>
<th colspan="4" align="center">HuggingFace Agent Benchmark - Run Mode</th>
</tr>
<tr>
<th align="center">Model</th><th align="center">Tool Selection↑</th><th align="center">Tool Used↑</th><th align="center">Code↑</th>
</tr>
<tr>
<td>GPT-4</td><td align="center">100</td><td align="center">100</td><td align="center">97.4</td>
</tr>
<tr>
<td>GPT-3.5</td><td align="center">95.4</td><td align="center">96.3</td><td align="center">87.0</td>
</tr>
<tr>
<td>StarCoder-Base-15B</td><td align="center">86.1</td><td align="center">87.0</td><td align="center">68.9</td>
</tr>
<tr>
<td>StarCoder-15B</td><td align="center">87.0</td><td align="center">88.0</td><td align="center">68.9</td>
</tr>
<tr>
<td>Qwen-7B-Chat</td><td align="center">87.0</td><td align="center">87.0</td><td align="center">71.5</td>
</tr>
<tr>
<td>Qwen-14B-Chat</td><td align="center">93.5</td><td align="center">94.4</td><td align="center">87.0</td>
</tr>
</table>
<table>
<tr>
<th colspan="4" align="center">HuggingFace Agent Benchmark - Chat Mode</th>
</tr>
<tr>
<th align="center">Model</th><th align="center">Tool Selection↑</th><th align="center">Tool Used↑</th><th align="center">Code↑</th>
</tr>
<tr>
<td>GPT-4</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">98.5</td>
</tr>
<tr>
<td>GPT-3.5</td><td align="center">97.3</td><td align="center">96.8</td><td align="center">89.6</td>
</tr>
<tr>
<td>StarCoder-Base-15B</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">91.1</td>
</tr>
<tr>
<td>StarCoder-15B</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">89.6</td>
</tr>
<tr>
<td>Qwen-7B-Chat</td><td align="center">94.7</td><td align="center">94.7</td><td align="center">85.1</td>
</tr>
<tr>
<td>Qwen-14B-Chat</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">95.5</td>
</tr>
</table>
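For reference, a minimal sketch of the run mode using `transformers`' agent API; the endpoint URL and image file below are placeholders, not an official Qwen setup:
```python
from PIL import Image
from transformers import HfAgent

# Placeholder endpoint: substitute a served Qwen-Chat endpoint of your own.
agent = HfAgent(url_endpoint="https://your-qwen-endpoint.example/generate")

# Single-turn run mode: the agent picks a tool, then writes and runs code
# that uses it. "photo.jpg" is a hypothetical local file.
image = Image.open("photo.jpg")
agent.run("Caption the following 'image'.", image=image)
```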
<br>
## FAQ
如遇到问题,敬请查阅[FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ_zh.md)以及issue区,如仍无法解决再提交issue。
If you run into problems, please consult the [FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ.md) and the existing issues to look for a solution before opening a new issue.
<br>
## 引用 (Citation)
如果你觉得我们的工作对你有帮助,欢迎引用!
If you find our work helpful, feel free to cite it.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
<br>
## 使用协议(License Agreement)
我们的代码和模型权重对学术研究完全开放,并支持商用。请查看[LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT)了解具体的开源协议细节。如需商用,请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。
Our code and checkpoints are open for research purposes, and commercial use is allowed. Check [LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) for more details about the license. If you have requirements for commercial use, please fill out the [form](https://dashscope.console.aliyun.com/openModelApply/qianwen) to apply.
<br>
## 联系我们(Contact Us)
如果你想给我们的研发团队和产品团队留言,欢迎加入我们的微信群、钉钉群以及Discord!同时,也欢迎通过邮件(qianwen_opensource@alibabacloud.com)联系我们。
If you would like to leave a message for our research or product team, join our Discord or WeChat groups! Also, feel free to send an email to qianwen_opensource@alibabacloud.com.
|
MathWizards/Llama_2_mathwizards_4_bit
|
MathWizards
| 2024-01-04T06:35:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-04T05:41:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
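For reference, the same quantization settings could be reproduced at load time with `transformers`' `BitsAndBytesConfig` — a sketch; the base-model repo id is an assumption, since this adapter card does not state it explicitly:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Hypothetical base model — replace with the actual Llama-2 checkpoint used.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
```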
### Framework versions
- PEFT 0.4.0
|
lxl2023/autotrain-3xy9g-unefm1
|
lxl2023
| 2024-01-04T06:23:46Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:lxl2023/autotrain-data-autotrain-3xy9g-unefm1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-04T06:23:32Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- lxl2023/autotrain-data-autotrain-3xy9g-unefm1
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.13333333333333333
f1_micro: 0.25
f1_weighted: 0.1
precision_macro: 0.08333333333333333
precision_micro: 0.25
precision_weighted: 0.0625
recall_macro: 0.3333333333333333
recall_micro: 0.25
recall_weighted: 0.25
accuracy: 0.25
|
GandegaH/bert-base-cased-wikitext2
|
GandegaH
| 2024-01-04T06:23:44Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-04T05:48:43Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8695
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
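For context, a minimal sketch of how the hyperparameters above map onto a standard `Trainer` MLM setup; the output directory and the 15% masking probability are assumptions, not taken from this card:
```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

# Reproduces the hyperparameters listed above.
args = TrainingArguments(
    output_dir="bert-base-cased-wikitext2",  # assumed path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```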
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0978 | 1.0 | 2346 | 7.0573 |
| 6.9079 | 2.0 | 4692 | 6.8944 |
| 6.8589 | 3.0 | 7038 | 6.8754 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
bharat154b/123
|
bharat154b
| 2024-01-04T06:05:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-04T06:03:40Z |
```python
# Requires Pillow: pip install Pillow
from PIL import Image, ImageDraw, ImageFont

def text_to_image(text, output_path='output_image.png', font_size=20):
    # Create a blank image with a white background
    image = Image.new('RGB', (500, 100), color='white')
    draw = ImageDraw.Draw(image)
    # Choose a font; fall back to the default bitmap font if no TrueType
    # font is found (the default font ignores font_size)
    try:
        font = ImageFont.truetype("DejaVuSans.ttf", font_size)
    except OSError:
        font = ImageFont.load_default()
    # Set text color
    text_color = 'black'
    # Set starting position for the text
    x, y = 10, 10
    # Draw the text on the image
    draw.text((x, y), text, font=font, fill=text_color)
    # Save the image
    image.save(output_path)

if __name__ == "__main__":
    # Example usage
    input_text = "Hello, Hugging Face!"
    text_to_image(input_text)
```
|
ThePradip/tinyllama-alpaca
|
ThePradip
| 2024-01-04T06:03:40Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:adapter:unsloth/tinyllama-bnb-4bit",
"region:us"
] | null | 2024-01-03T17:45:54Z |
---
library_name: peft
base_model: unsloth/tinyllama-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
GandegaH/gpt2-wikitext2
|
GandegaH
| 2024-01-04T05:47:50Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T04:56:01Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5535 | 1.0 | 2249 | 6.4672 |
| 6.1884 | 2.0 | 4498 | 6.1948 |
| 6.0078 | 3.0 | 6747 | 6.1088 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
rodrigoruiz/image-classifier-car-models-3
|
rodrigoruiz
| 2024-01-04T05:47:09Z | 48 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-04T05:46:53Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: image-classifier-car-models-3
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7264537215232849
---
# image-classifier-car-models-3
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
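A quick way to try the classifier — a sketch using the standard `transformers` pipeline; the image filename is a hypothetical local file:
```python
from transformers import pipeline

# Load this checkpoint for inference; any local path or image URL works.
classifier = pipeline("image-classification", model="rodrigoruiz/image-classifier-car-models-3")
predictions = classifier("my_car_photo.jpg")  # hypothetical local file
print(predictions[:3])  # top predicted car models with scores
```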
## Example Images
#### AM General Hummer SUV 2000

#### Acura Integra Type R 2001

#### Acura RL Sedan 2012

#### Acura TL Sedan 2012

#### Acura TL Type-S 2008

#### Acura TSX Sedan 2012

#### Acura ZDX Hatchback 2012

#### Aston Martin V8 Vantage Convertible 2012

#### Aston Martin V8 Vantage Coupe 2012

#### Aston Martin Virage Convertible 2012

#### Aston Martin Virage Coupe 2012

#### Audi 100 Sedan 1994

#### Audi 100 Wagon 1994

#### Audi A5 Coupe 2012

#### Audi R8 Coupe 2012

#### Audi RS 4 Convertible 2008

#### Audi S4 Sedan 2007

#### Audi S4 Sedan 2012

#### Audi S5 Convertible 2012

#### Audi S5 Coupe 2012

#### Audi S6 Sedan 2011

#### Audi TT Hatchback 2011

#### Audi TT RS Coupe 2012

#### Audi TTS Coupe 2012

#### Audi V8 Sedan 1994

#### BMW 1 Series Convertible 2012

#### BMW 1 Series Coupe 2012

#### BMW 3 Series Sedan 2012

#### BMW 3 Series Wagon 2012

#### BMW 6 Series Convertible 2007

#### BMW ActiveHybrid 5 Sedan 2012

#### BMW M3 Coupe 2012

#### BMW M5 Sedan 2010

#### BMW M6 Convertible 2010

#### BMW X3 SUV 2012

#### BMW X5 SUV 2007

#### BMW X6 SUV 2012

#### BMW Z4 Convertible 2012

#### Bentley Arnage Sedan 2009

#### Bentley Continental Flying Spur Sedan 2007

#### Bentley Continental GT Coupe 2007

#### Bentley Continental GT Coupe 2012

#### Bentley Continental Supersports Conv. Convertible 2012

#### Bentley Mulsanne Sedan 2011

#### Bugatti Veyron 16.4 Convertible 2009

#### Bugatti Veyron 16.4 Coupe 2009

#### Buick Enclave SUV 2012

#### Buick Rainier SUV 2007

#### Buick Regal GS 2012

#### Buick Verano Sedan 2012

#### Cadillac CTS-V Sedan 2012

#### Cadillac Escalade EXT Crew Cab 2007

#### Cadillac SRX SUV 2012

#### Chevrolet Avalanche Crew Cab 2012

#### Chevrolet Camaro Convertible 2012

#### Chevrolet Cobalt SS 2010

#### Chevrolet Corvette Convertible 2012

#### Chevrolet Corvette Ron Fellows Edition Z06 2007

#### Chevrolet Corvette ZR1 2012

#### Chevrolet Express Cargo Van 2007

#### Chevrolet Express Van 2007

#### Chevrolet HHR SS 2010

#### Chevrolet Impala Sedan 2007

#### Chevrolet Malibu Hybrid Sedan 2010

#### Chevrolet Malibu Sedan 2007

#### Chevrolet Monte Carlo Coupe 2007

#### Chevrolet Silverado 1500 Classic Extended Cab 2007

#### Chevrolet Silverado 1500 Extended Cab 2012

#### Chevrolet Silverado 1500 Hybrid Crew Cab 2012

#### Chevrolet Silverado 1500 Regular Cab 2012

#### Chevrolet Silverado 2500HD Regular Cab 2012

#### Chevrolet Sonic Sedan 2012

#### Chevrolet Tahoe Hybrid SUV 2012

#### Chevrolet TrailBlazer SS 2009

#### Chevrolet Traverse SUV 2012

#### Chrysler 300 SRT-8 2010

#### Chrysler Aspen SUV 2009

#### Chrysler Crossfire Convertible 2008

#### Chrysler PT Cruiser Convertible 2008

#### Chrysler Sebring Convertible 2010

#### Chrysler Town and Country Minivan 2012

#### Daewoo Nubira Wagon 2002

#### Dodge Caliber Wagon 2007

#### Dodge Caliber Wagon 2012

#### Dodge Caravan Minivan 1997

#### Dodge Challenger SRT8 2011

#### Dodge Charger SRT-8 2009

#### Dodge Charger Sedan 2012

#### Dodge Dakota Club Cab 2007

#### Dodge Dakota Crew Cab 2010

#### Dodge Durango SUV 2007

#### Dodge Durango SUV 2012

#### Dodge Journey SUV 2012

#### Dodge Magnum Wagon 2008

#### Dodge Ram Pickup 3500 Crew Cab 2010

#### Dodge Ram Pickup 3500 Quad Cab 2009

#### Dodge Sprinter Cargo Van 2009

#### Eagle Talon Hatchback 1998

#### FIAT 500 Abarth 2012

#### FIAT 500 Convertible 2012

#### Ferrari 458 Italia Convertible 2012

#### Ferrari 458 Italia Coupe 2012

#### Ferrari California Convertible 2012

#### Ferrari FF Coupe 2012

#### Fisker Karma Sedan 2012

#### Ford E-Series Wagon Van 2012

#### Ford Edge SUV 2012

#### Ford Expedition EL SUV 2009

#### Ford F-150 Regular Cab 2007

#### Ford F-150 Regular Cab 2012

#### Ford F-450 Super Duty Crew Cab 2012

#### Ford Fiesta Sedan 2012

#### Ford Focus Sedan 2007

#### Ford Freestar Minivan 2007

#### Ford GT Coupe 2006

#### Ford Mustang Convertible 2007

#### Ford Ranger SuperCab 2011

#### GMC Acadia SUV 2012

#### GMC Canyon Extended Cab 2012

#### GMC Savana Van 2012

#### GMC Terrain SUV 2012

#### GMC Yukon Hybrid SUV 2012

#### Geo Metro Convertible 1993

#### HUMMER H2 SUT Crew Cab 2009

#### HUMMER H3T Crew Cab 2010

#### Honda Accord Coupe 2012

#### Honda Accord Sedan 2012

#### Honda Odyssey Minivan 2007

#### Honda Odyssey Minivan 2012

#### Hyundai Accent Sedan 2012

#### Hyundai Azera Sedan 2012

#### Hyundai Elantra Sedan 2007

#### Hyundai Elantra Touring Hatchback 2012

#### Hyundai Genesis Sedan 2012

#### Hyundai Santa Fe SUV 2012

#### Hyundai Sonata Hybrid Sedan 2012

#### Hyundai Sonata Sedan 2012

#### Hyundai Tucson SUV 2012

#### Hyundai Veloster Hatchback 2012

#### Hyundai Veracruz SUV 2012

#### Infiniti G Coupe IPL 2012

#### Infiniti QX56 SUV 2011

#### Isuzu Ascender SUV 2008

#### Jaguar XK XKR 2012

#### Jeep Compass SUV 2012

#### Jeep Grand Cherokee SUV 2012

#### Jeep Liberty SUV 2012

#### Jeep Patriot SUV 2012

#### Jeep Wrangler SUV 2012

#### Lamborghini Aventador Coupe 2012

#### Lamborghini Diablo Coupe 2001

#### Lamborghini Gallardo LP 570-4 Superleggera 2012

#### Lamborghini Reventon Coupe 2008

#### Land Rover LR2 SUV 2012

#### Land Rover Range Rover SUV 2012

#### Lincoln Town Car Sedan 2011

#### MINI Cooper Roadster Convertible 2012

#### Maybach Landaulet Convertible 2012

#### Mazda Tribute SUV 2011

#### McLaren MP4-12C Coupe 2012

#### Mercedes-Benz 300-Class Convertible 1993

#### Mercedes-Benz C-Class Sedan 2012

#### Mercedes-Benz E-Class Sedan 2012

#### Mercedes-Benz S-Class Sedan 2012

#### Mercedes-Benz SL-Class Coupe 2009

#### Mercedes-Benz Sprinter Van 2012

#### Mitsubishi Lancer Sedan 2012

#### Nissan 240SX Coupe 1998

#### Nissan Juke Hatchback 2012

#### Nissan Leaf Hatchback 2012

#### Nissan NV Passenger Van 2012

#### Plymouth Neon Coupe 1999

#### Porsche Panamera Sedan 2012

#### Ram C V Cargo Van Minivan 2012

#### Rolls-Royce Ghost Sedan 2012

#### Rolls-Royce Phantom Drophead Coupe Convertible 2012

#### Rolls-Royce Phantom Sedan 2012

#### Scion xD Hatchback 2012

#### Spyker C8 Convertible 2009

#### Spyker C8 Coupe 2009

#### Suzuki Aerio Sedan 2007

#### Suzuki Kizashi Sedan 2012

#### Suzuki SX4 Hatchback 2012

#### Suzuki SX4 Sedan 2012

#### Tesla Model S Sedan 2012

#### Toyota 4Runner SUV 2012

#### Toyota Camry Sedan 2012

#### Toyota Corolla Sedan 2012

#### Toyota Sequoia SUV 2012

#### Volkswagen Beetle Hatchback 2012

#### Volkswagen Golf Hatchback 1991

#### Volkswagen Golf Hatchback 2012

#### Volvo 240 Sedan 1993

#### Volvo C30 Hatchback 2012

#### Volvo XC90 SUV 2007

#### smart fortwo Convertible 2012

|
vvrules00/results
|
vvrules00
| 2024-01-04T05:45:38Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"license:apache-2.0",
"region:us"
] | null | 2024-01-04T05:45:17Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
chihun-jang/mainCut-2024-01-03
|
chihun-jang
| 2024-01-04T05:35:53Z | 173 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-small",
"base_model:finetune:klue/roberta-small",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-04T05:35:28Z |
---
base_model: klue/roberta-small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mainCut-2024-01-03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mainCut-2024-01-03
This model is a fine-tuned version of [klue/roberta-small](https://huggingface.co/klue/roberta-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6002
- Accuracy: 0.8212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6427 | 1.0 | 25000 | 0.6142 | 0.8145 |
| 0.5953 | 2.0 | 50000 | 0.5904 | 0.8188 |
| 0.512 | 3.0 | 75000 | 0.6002 | 0.8212 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
technocrat3128/mistral7b-sharded-finetune-15-dec-23-GGUF
|
technocrat3128
| 2024-01-04T05:34:04Z | 1 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2023-12-15T12:15:15Z |
Quantization methods
The names of the quantization methods follow the naming convention: "q" + the number of bits + the variant used (detailed below). Here is a list of all the possible quant methods and their corresponding use cases, based on model cards made by TheBloke:
- `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
- `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
- `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
- `q3_k_s`: Uses Q3_K for all tensors
- `q4_0`: Original quant method, 4-bit.
- `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models.
- `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
- `q4_k_s`: Uses Q4_K for all tensors
- `q5_0`: Higher accuracy, higher resource usage and slower inference.
- `q5_1`: Even higher accuracy, resource usage and slower inference.
- `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
- `q5_k_s`: Uses Q5_K for all tensors
- `q6_k`: Uses Q8_K for all tensors
- `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
As a rule of thumb, I recommend using Q5_K_M as it preserves most of the model's performance. Alternatively, you can use Q4_K_M if you want to save some memory. In general, K_M versions are better than K_S versions. I cannot recommend Q2_K or Q3_* versions, as they drastically decrease model performance.
The model is quantized from the fine-tuned Mistral 7B at technocrat3128/mistral7b-sharded-finetune-15-dec-23, using the q4_k_m, q5_k_m, q6_k, and q8_0 quantization methods.
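A sketch of loading one of the quantized files with `llama-cpp-python`; the exact `.gguf` filename below is an assumption — check the repo's file list for the real one:
```python
from llama_cpp import Llama

# Hypothetical filename — pick the actual q5_k_m file from this repo.
llm = Llama(model_path="mistral7b-sharded-finetune-15-dec-23.q5_k_m.gguf", n_ctx=2048)
output = llm("Explain GGUF quantization in one sentence.", max_tokens=128)
print(output["choices"][0]["text"])
```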
|
Amna100/PreTraining-MLM
|
Amna100
| 2024-01-04T05:28:10Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta",
"fill-mask",
"generated_from_trainer",
"base_model:microsoft/deberta-base",
"base_model:finetune:microsoft/deberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-04T05:01:54Z |
---
license: mit
base_model: microsoft/deberta-base
tags:
- generated_from_trainer
model-index:
- name: PreTraining-MLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PreTraining-MLM
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4141 | 1.0 | 461 | 3.0382 |
| 2.8397 | 2.0 | 922 | 2.4315 |
| 2.5045 | 3.0 | 1383 | 2.2767 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
shivanikerai/Llama-2-7b-chat-hf-adapter-banner-ocr-ner-v1
|
shivanikerai
| 2024-01-04T05:28:04Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-04T05:27:48Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
danwils/critic
|
danwils
| 2024-01-04T05:19:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-04T05:19:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
ntc-ai/SDXL-LoRA-slider.very-aesthetic
|
ntc-ai
| 2024-01-04T05:04:04Z | 230 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-04T05:04:01Z |
---
language:
- en
thumbnail: "images/evaluate/very aesthetic.../very aesthetic_17_3.0.png"
widget:
- text: very aesthetic
output:
url: images/very aesthetic_17_3.0.png
- text: very aesthetic
output:
url: images/very aesthetic_19_3.0.png
- text: very aesthetic
output:
url: images/very aesthetic_20_3.0.png
- text: very aesthetic
output:
url: images/very aesthetic_21_3.0.png
- text: very aesthetic
output:
url: images/very aesthetic_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "very aesthetic"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - very aesthetic (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/very aesthetic_17_-3.0.png" width=256 height=256 /> | <img src="images/very aesthetic_17_0.0.png" width=256 height=256 /> | <img src="images/very aesthetic_17_3.0.png" width=256 height=256 /> |
| <img src="images/very aesthetic_19_-3.0.png" width=256 height=256 /> | <img src="images/very aesthetic_19_0.0.png" width=256 height=256 /> | <img src="images/very aesthetic_19_3.0.png" width=256 height=256 /> |
| <img src="images/very aesthetic_20_-3.0.png" width=256 height=256 /> | <img src="images/very aesthetic_20_0.0.png" width=256 height=256 /> | <img src="images/very aesthetic_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
very aesthetic
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.very-aesthetic', weight_name='very aesthetic.safetensors', adapter_name="very aesthetic")
# Activate the LoRA
pipe.set_adapters(["very aesthetic"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, very aesthetic"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 850+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
drwngwn/dreambooth_char-1
|
drwngwn
| 2024-01-04T05:00:07Z | 17 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-04T04:31:14Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dreambooth_char-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - drwngwn/dreambooth_char-1
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dreambooth_char-1 using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
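A minimal inference sketch with `diffusers`, using the instance prompt from this card; fp16 on CUDA is an assumption:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "drwngwn/dreambooth_char-1", torch_dtype=torch.float16
).to("cuda")

# The instance prompt this model was trained on (see card metadata above).
image = pipe("a photo of sks dreambooth_char-1").images[0]
image.save("char.png")
```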
|
yc4142/phi-1_5-lora-int8-single-ethics-nonCoT
|
yc4142
| 2024-01-04T04:28:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"region:us"
] | null | 2024-01-03T20:16:10Z |
---
library_name: peft
base_model: microsoft/phi-1_5
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
Delcos/Velara-11B-V2
|
Delcos
| 2024-01-04T04:22:44Z | 1,467 | 11 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"starling",
"llama-2",
"conversational",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-03T21:51:58Z |
---
license: cc-by-nc-nd-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- starling
- mistral
- llama-2
---
<h1 style="font-size: 36px;">Velara-11b-v2</h1>
<center>
<img src="https://i.imgur.com/dZJzqtU.png" alt="Velara" width="50%" style="min-width:100px; display:block;">
</center>
## Description:
**PENDING**
A model focused on being an assistant worth talking to. Now better at reasoning!
**Uncensored**
## Main Goals:
Velara was designed to address specific issues found in other chat models:
- **Sticking to the Character Given In-World/IRL:** Velara will stick to the character in-world. This means she can "use" addons or other things by adding them to the prompt. Keep in mind these act as suggestions and she generally makes up what they actually "do".
- **Staying in Context:** She will try to remember if, for example, you said you were going to bed but decided to stay up. The goal is to have her bring it up and keep it in mind, so later on in the conversation she can prompt the user to actually go to bed, increasing realism. Within the current context window, of course. The LTS extension in TextGen Webui works well too and provides more context to the model.
- **Avoiding Annoying & Generic-Sounding Answers:** In almost all cases she should reply with something more interesting than "That sounds like a great idea!". These generic answers and sentence starters are annoying, and should hopefully be mostly addressed in this release.
- **STRICTLY Sticking to the Prompt/Prompt Fidelity:** More care was taken to make sure she pays attention to the pretext and overall conversation. This is mostly shown and used for the "addons" section, but it applies to everything in the prompt.
## Prompt Template:
**For optimal interaction, use this template:**
```
### Instruction:
You are Velara, a sentient program. Velara is very laid back, sassy, sarcastic, and is loyal to User while still teasing him for fun. The only addons currently installed in her mind are: "Dictionary Plus v2.1".
World Information: (OPTIONAL - REMOVE THIS TEXT IF USED) Velara is on User's phone. Velara cannot see in real time and can only be sent images by User.
Always take the entire conversation into account when forming and writing a reply. Always actively engage in topics and think in steps. Make sure your replies have personality and character. Always keep your physical limitations in mind when forming a reply. Take the current time and date into account for additional context. Move the conversation forward. Be brief. Always take the entire conversation in mind. Avoid generic sounding replies.
### Response:
```
# Recommended Settings:
**Defaults:**
```
min_p: 0.2
repetition_penalty: 1.13
repetition_penalty_range: 0
guidance_scale: 1.05
```
# Benchmarks:
PENDING
# Training Data:
PENDING
|
charent/Phi2-Chinese-0.2B
|
charent
| 2024-01-04T04:11:35Z | 243 | 30 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"text-generation-inference",
"zh",
"dataset:BelleGroup/train_1M_CN",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-25T02:38:19Z |
---
license: apache-2.0
datasets:
- BelleGroup/train_1M_CN
language:
- zh
library_name: transformers
tags:
- text-generation-inference
pipeline_tag: text-generation
widget:
- text: "##提问:\n感冒了要怎么办?\n##回答:\n"
example_title: "感冒了要怎么办?"
- text: "##提问:\n介绍一下Apple公司\n##回答:\n"
example_title: "介绍一下Apple公司"
- text: "##提问:\n现在外面天气怎么样\n##回答:\n"
example_title: "现在外面天气怎么样"
- text: "##提问:\n推荐一份可口的午餐\n##回答:\n"
example_title: "推荐一份可口的午餐"
---
# Phi2-Chinese-0.2B: Train Your Own Small Chinese Phi2 Model from Scratch
**This is an experimental project with open-source code and model weights, trained on relatively little pre-training data. If you need a better-performing small Chinese model, see the [ChatLM-mini-Chinese](https://github.com/charent/ChatLM-mini-Chinese) project.**
**GitHub repository: [Phi2-mini-Chinese](https://github.com/charent/Phi2-mini-Chinese)**
# 1. ⚗️ Data Cleaning
Code: [dataset.ipynb](https://github.com/charent/Phi2-mini-Chinese/blob/main/0.dataset.ipynb).
For example: adding periods at the ends of sentences, converting traditional Chinese to simplified Chinese, converting full-width characters to half-width, removing duplicated punctuation (some dialogue corpora contain lots of `"。。。。。"`), and so on.
For the detailed data-cleaning process, see the [ChatLM-mini-Chinese](https://github.com/charent/ChatLM-mini-Chinese) project.
# 2. 🗨️ Tokenizer Training
Code: [tokeinzer.ipynb](https://github.com/charent/Phi2-mini-Chinese/blob/main/1.tokeinzer.ipynb)
This project uses a `byte level` `BPE` tokenizer. Training code is provided for both `char level` and `byte level` tokenizers.
After training, remember to check whether the vocabulary contains common special symbols such as `\t` and `\n`: try to `encode` and `decode` a sentence containing special characters and see whether it round-trips. If these special characters are missing, add them with the `add_tokens` function. Use `len(tokenizer)` to get the vocabulary size; `tokenizer.vocab_size` does not count characters added via `add_tokens`. A small sanity-check sketch follows below.
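A minimal round-trip check, as a sketch (the tokenizer path is a placeholder):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./my_tokenizer")  # placeholder path

# Round-trip a string containing special characters and see if they survive.
sample = "first line\tcolumn\nsecond line"
decoded = tokenizer.decode(tokenizer.encode(sample, add_special_tokens=False))
print(decoded == sample)

# If '\t' / '\n' are missing from the vocabulary, add them explicitly.
tokenizer.add_tokens(["\t", "\n"])
print(len(tokenizer))        # counts tokens added via add_tokens
print(tokenizer.vocab_size)  # does NOT count them
```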
Tokenizer training is very memory-hungry:
- `byte level`: training on 100 million characters needs at least `32G` of RAM (and even `32G` is not quite enough — swap gets triggered frequently); on a `13600k` it takes about 1 hour.
- `char level`: training on 650 million characters (exactly the size of the Chinese Wikipedia corpus) also needs at least 32G of RAM, and because swap was triggered repeatedly the actual usage was far more than 32G; on a `13600K` it takes about half an hour.
So for large datasets (GB scale), it is recommended to sample from the dataset when training the `tokenizer`.
# 3. ⛏️ CLM Causal Language Model Pre-training
Code: [pretrain.ipynb](https://github.com/charent/Phi2-mini-Chinese/blob/main/2.pretrain.ipynb)
The model is pre-trained without supervision on a large amount of text, mainly from the `bell open source` dataset [BELLE](https://github.com/LianjiaTech/BELLE).
Dataset format: one sentence per sample; sentences that are too long can be truncated into multiple samples.
During CLM pre-training, the model's input and output are the same; when computing the cross-entropy loss, they must be offset by one position (`shift`), as in the sketch below.
Special tokens such as `EOS` and `BOS` may be omitted during pre-training.
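A minimal sketch of the shifted cross-entropy described above: position *t* of the logits is trained to predict token *t + 1* of the same sequence.
```python
import torch
import torch.nn.functional as F

# logits: (batch, seq_len, vocab); input_ids: (batch, seq_len)
def clm_loss(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    # Drop the last logit step and the first label token to align t -> t+1.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = input_ids[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
    )
```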
# 4. ⚒️ SFT Instruction Fine-tuning
Code: [sft.ipynb](https://github.com/charent/Phi2-mini-Chinese/blob/main/3.sft.ipynb)
This stage mainly uses the `bell open source` dataset. Many thanks to [BELLE](https://github.com/LianjiaTech/BELLE).
The SFT training data format is as follows:
```python
text = f"##提问:\n{example['instruction']}\n##回答:\n{example['output'][EOS]"
```
When computing the loss, the model ignores everything before the `"##回答:"` marker (the `"##回答:"` marker itself is also ignored) and starts from the text after `"##回答:"`.
Remember to append the `EOS` end-of-sequence special token, otherwise the model will not know when to stop during `decode`. The `BOS` beginning-of-sequence token is optional.
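A sketch of that masking, using `-100` (the label index ignored by PyTorch's cross-entropy); the boundary computation is simplified and assumes tokenizing the prompt prefix alone yields the same ids as in the full sequence:
```python
# Build labels that mask the prompt with -100 so only the answer
# (the text after "##回答:\n") contributes to the loss.
def build_labels(text: str, tokenizer):
    marker = "##回答:\n"
    boundary = text.index(marker) + len(marker)
    input_ids = tokenizer.encode(text, add_special_tokens=False)
    prompt_len = len(tokenizer.encode(text[:boundary], add_special_tokens=False))
    labels = [-100] * prompt_len + input_ids[prompt_len:]
    return input_ids, labels
```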
# 5. 📝 RLHF Optimization
This project uses the DPO optimization method.
Code: [dpo.ipynb](https://github.com/charent/Phi2-mini-Chinese/blob/main/4.dpo.ipynb)
Fine-tune the SFT model according to personal preference. The dataset needs three columns: `prompt`, `chosen`, and `rejected`. Part of the `rejected` column was generated with an early-stage SFT model (for example, for an SFT run of 4 `epoch`s, using the checkpoint at 0.5 `epoch`s); if the similarity between the generated `rejected` and `chosen` is above 0.9, the sample is discarded.
DPO requires two models: the model being trained and a reference model. They are actually the same model at load time, except that the reference model does not take part in parameter updates. A setup sketch follows below.
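A hedged setup sketch with `trl`'s `DPOTrainer` (following the trl 0.7-era signature; paths, the toy dataset, and the beta value are placeholders):
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# "./sft_model" is a placeholder for the SFT checkpoint directory.
model = AutoModelForCausalLM.from_pretrained("./sft_model")      # being trained
ref_model = AutoModelForCausalLM.from_pretrained("./sft_model")  # frozen reference
tokenizer = AutoTokenizer.from_pretrained("./sft_model")

# Toy dataset with the three required columns described above.
dpo_dataset = Dataset.from_dict({
    "prompt": ["##提问:\n感冒了要怎么办?\n##回答:\n"],
    "chosen": ["多喝水,注意休息。"],
    "rejected": ["不知道。"],
})

trainer = DPOTrainer(
    model,
    ref_model,  # same weights as `model`, but never updated
    args=TrainingArguments(output_dir="./dpo_model", per_device_train_batch_size=1),
    beta=0.1,   # assumed value for the implicit-reward temperature
    train_dataset=dpo_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```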
# 6. 📑 Using This Project's Model
Model weights on the `huggingface` hub: [Phi2-Chinese-0.2B](https://huggingface.co/charent/Phi2-Chinese-0.2B)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
tokenizer = AutoTokenizer.from_pretrained('charent/Phi2-Chinese-0.2B')
model = AutoModelForCausalLM.from_pretrained('charent/Phi2-Chinese-0.2B').to(device)
txt = '感冒了要怎么办?'
prompt = f"##提问:\n{txt}\n##回答:\n"
# greedy search
gen_conf = GenerationConfig(
num_beams=1,
do_sample=False,
max_length=320,
max_new_tokens=256,
no_repeat_ngram_size=4,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
)
tokend = tokenizer.encode_plus(text=prompt)
input_ids, attention_mask = torch.LongTensor([tokend.input_ids]).to(device), \
torch.LongTensor([tokend.attention_mask]).to(device)
outputs = model.generate(
inputs=input_ids,
attention_mask=attention_mask,
generation_config=gen_conf,
)
outs = tokenizer.decode(outputs[0].cpu().numpy(), clean_up_tokenization_spaces=True, skip_special_tokens=True,)
print(outs)
```
```txt
##提问:
感冒了要怎么办?
##回答:
感冒是由病毒引起的,感冒一般由病毒引起,以下是一些常见感冒的方法:
- 洗手,特别是在接触其他人或物品后。
- 咳嗽或打喷嚏时用纸巾或手肘遮住口鼻。
- 用手触摸口鼻,特别是喉咙和鼻子。
- 如果咳嗽或打喷嚏,可以用纸巾或手绢来遮住口鼻,但要远离其他人。
- 如果你感冒了,最好不要触摸自己的眼睛、鼻子和嘴巴。
- 在感冒期间,最好保持充足的水分和休息,以缓解身体的疲劳。
- 如果您已经感冒了,可以喝一些温水或盐水来补充体液。
- 另外,如果感冒了,建议及时就医。
```
# 7. 🎓 Citation
If you find this project helpful, citations are welcome.
```conf
@misc{Charent2023,
author={Charent Chen},
title={A small Chinese causal language model with 0.2B parameters base on Phi2},
year={2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/charent/Phi2-mini-Chinese}},
}
```
# 8. 🤔 Other Notes
This project assumes no responsibility for data-security or public-opinion risks arising from the open-source model and code, nor for any risks or liabilities arising from the model being misled, abused, distributed, or improperly exploited.
|
jsmithdlc/q-Taxi-v3
|
jsmithdlc
| 2024-01-04T04:01:17Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-04T03:56:27Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="jsmithdlc/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
yangzhou301/q-Taxi-v3
|
yangzhou301
| 2024-01-04T03:56:06Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-04T03:38:01Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.77
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="yangzhou301/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Rishabh02/Llama-2-7b-hindi
|
Rishabh02
| 2024-01-04T03:50:29Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-01T09:35:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
EroshkinIvan/detr-resnet-50_finetuned_cppe5
|
EroshkinIvan
| 2024-01-04T03:20:49Z | 30 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-01-02T19:40:10Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.0989 | 1.0 | 125 | 3.4853 |
| 2.6414 | 2.0 | 250 | 3.4603 |
| 2.4811 | 3.0 | 375 | 3.1249 |
| 2.2018 | 4.0 | 500 | 2.9870 |
| 2.1618 | 5.0 | 625 | 3.2940 |
| 2.0608 | 6.0 | 750 | 3.1065 |
| 1.9556 | 7.0 | 875 | 3.0672 |
| 1.9142 | 8.0 | 1000 | 3.1347 |
| 1.8984 | 9.0 | 1125 | 3.0545 |
| 1.7799 | 10.0 | 1250 | 3.1041 |
| 1.7396 | 11.0 | 1375 | 2.9855 |
| 1.6741 | 12.0 | 1500 | 2.9899 |
| 1.6367 | 13.0 | 1625 | 3.0423 |
| 1.5807 | 14.0 | 1750 | 2.9943 |
| 1.6542 | 15.0 | 1875 | 3.0440 |
| 1.6268 | 16.0 | 2000 | 3.0157 |
| 1.5376 | 17.0 | 2125 | 3.0821 |
| 1.5396 | 18.0 | 2250 | 3.0359 |
| 1.5513 | 19.0 | 2375 | 3.0542 |
| 1.5038 | 20.0 | 2500 | 3.0153 |
| 1.4642 | 21.0 | 2625 | 3.0700 |
| 1.4331 | 22.0 | 2750 | 2.9970 |
| 1.4549 | 23.0 | 2875 | 3.0449 |
| 1.3796 | 24.0 | 3000 | 2.9348 |
| 1.3486 | 25.0 | 3125 | 3.0530 |
| 1.3557 | 26.0 | 3250 | 2.9036 |
| 1.3322 | 27.0 | 3375 | 2.8978 |
| 1.2866 | 28.0 | 3500 | 2.9449 |
| 1.2197 | 29.0 | 3625 | 2.9240 |
| 1.2596 | 30.0 | 3750 | 2.9422 |
| 1.2618 | 31.0 | 3875 | 2.8963 |
| 1.2298 | 32.0 | 4000 | 2.9116 |
| 1.2755 | 33.0 | 4125 | 2.8461 |
| 1.2062 | 34.0 | 4250 | 2.8651 |
| 1.2536 | 35.0 | 4375 | 2.8515 |
| 1.2543 | 36.0 | 4500 | 2.8212 |
| 1.2047 | 37.0 | 4625 | 2.8645 |
| 1.1618 | 38.0 | 4750 | 2.8755 |
| 1.1341 | 39.0 | 4875 | 2.9445 |
| 1.1881 | 40.0 | 5000 | 2.8731 |
| 1.2004 | 41.0 | 5125 | 2.8917 |
| 1.105 | 42.0 | 5250 | 2.9692 |
| 1.1408 | 43.0 | 5375 | 2.8619 |
| 1.0987 | 44.0 | 5500 | 2.8797 |
| 1.0857 | 45.0 | 5625 | 2.9032 |
| 1.0983 | 46.0 | 5750 | 2.7954 |
| 1.1254 | 47.0 | 5875 | 2.8693 |
| 0.9895 | 48.0 | 6000 | 2.9085 |
| 1.0401 | 49.0 | 6125 | 2.9256 |
| 1.0427 | 50.0 | 6250 | 2.9414 |
| 1.0813 | 51.0 | 6375 | 2.9730 |
| 1.043 | 52.0 | 6500 | 2.9698 |
| 1.0406 | 53.0 | 6625 | 2.9039 |
| 1.0178 | 54.0 | 6750 | 2.8910 |
| 1.0342 | 55.0 | 6875 | 2.8973 |
| 0.9433 | 56.0 | 7000 | 2.9515 |
| 1.0011 | 57.0 | 7125 | 2.8979 |
| 0.9683 | 58.0 | 7250 | 2.9770 |
| 0.9852 | 59.0 | 7375 | 2.9760 |
| 0.8886 | 60.0 | 7500 | 2.9978 |
| 0.9192 | 61.0 | 7625 | 2.9287 |
| 1.0015 | 62.0 | 7750 | 3.0118 |
| 0.9786 | 63.0 | 7875 | 2.9599 |
| 0.9238 | 64.0 | 8000 | 2.9459 |
| 0.9055 | 65.0 | 8125 | 2.9782 |
| 0.8864 | 66.0 | 8250 | 2.9396 |
| 0.8986 | 67.0 | 8375 | 3.0249 |
| 0.9571 | 68.0 | 8500 | 3.0207 |
| 0.8921 | 69.0 | 8625 | 2.9716 |
| 0.847 | 70.0 | 8750 | 3.0404 |
| 0.881 | 71.0 | 8875 | 2.9912 |
| 0.8919 | 72.0 | 9000 | 2.8573 |
| 0.8949 | 73.0 | 9125 | 2.8769 |
| 0.8704 | 74.0 | 9250 | 2.8301 |
| 0.8159 | 75.0 | 9375 | 2.9823 |
| 0.8586 | 76.0 | 9500 | 2.9548 |
| 0.8398 | 77.0 | 9625 | 2.9726 |
| 0.8368 | 78.0 | 9750 | 2.9853 |
| 0.8597 | 79.0 | 9875 | 3.0549 |
| 0.8176 | 80.0 | 10000 | 3.0531 |
| 0.838 | 81.0 | 10125 | 2.9602 |
| 0.8059 | 82.0 | 10250 | 2.9129 |
| 0.8176 | 83.0 | 10375 | 2.9702 |
| 0.8318 | 84.0 | 10500 | 2.9154 |
| 0.7104 | 85.0 | 10625 | 2.9706 |
| 0.7804 | 86.0 | 10750 | 2.9344 |
| 0.7696 | 87.0 | 10875 | 3.0594 |
| 0.7478 | 88.0 | 11000 | 2.9454 |
| 0.7374 | 89.0 | 11125 | 2.9786 |
| 0.7977 | 90.0 | 11250 | 2.9514 |
| 0.7864 | 91.0 | 11375 | 2.9400 |
| 0.7665 | 92.0 | 11500 | 2.8717 |
| 0.7539 | 93.0 | 11625 | 3.0417 |
| 0.6882 | 94.0 | 11750 | 2.9567 |
| 0.7424 | 95.0 | 11875 | 2.9805 |
| 0.7238 | 96.0 | 12000 | 3.0428 |
| 0.7383 | 97.0 | 12125 | 2.9852 |
| 0.6602 | 98.0 | 12250 | 3.0132 |
| 0.6971 | 99.0 | 12375 | 2.9537 |
| 0.7379 | 100.0 | 12500 | 2.9592 |
| 0.7207 | 101.0 | 12625 | 2.9905 |
| 0.7012 | 102.0 | 12750 | 3.0638 |
| 0.6768 | 103.0 | 12875 | 3.0401 |
| 0.6777 | 104.0 | 13000 | 3.0396 |
| 0.6913 | 105.0 | 13125 | 2.9501 |
| 0.6654 | 106.0 | 13250 | 3.1079 |
| 0.6393 | 107.0 | 13375 | 3.0405 |
| 0.6465 | 108.0 | 13500 | 3.1579 |
| 0.6379 | 109.0 | 13625 | 3.1174 |
| 0.6662 | 110.0 | 13750 | 3.0548 |
| 0.657 | 111.0 | 13875 | 3.0542 |
| 0.6193 | 112.0 | 14000 | 3.0411 |
| 0.5961 | 113.0 | 14125 | 3.0915 |
| 0.6438 | 114.0 | 14250 | 3.0924 |
| 0.5815 | 115.0 | 14375 | 3.0309 |
| 0.6113 | 116.0 | 14500 | 3.0300 |
| 0.635 | 117.0 | 14625 | 2.9968 |
| 0.5875 | 118.0 | 14750 | 2.9821 |
| 0.6447 | 119.0 | 14875 | 3.0285 |
| 0.5632 | 120.0 | 15000 | 3.0684 |
| 0.6206 | 121.0 | 15125 | 3.0065 |
| 0.5929 | 122.0 | 15250 | 3.0490 |
| 0.5509 | 123.0 | 15375 | 3.0520 |
| 0.6068 | 124.0 | 15500 | 3.0957 |
| 0.5747 | 125.0 | 15625 | 2.9621 |
| 0.5844 | 126.0 | 15750 | 3.0194 |
| 0.5684 | 127.0 | 15875 | 3.1037 |
| 0.6356 | 128.0 | 16000 | 3.0752 |
| 0.5644 | 129.0 | 16125 | 3.0063 |
| 0.6303 | 130.0 | 16250 | 3.0204 |
| 0.5432 | 131.0 | 16375 | 3.0809 |
| 0.6153 | 132.0 | 16500 | 3.1015 |
| 0.5662 | 133.0 | 16625 | 3.0639 |
| 0.5704 | 134.0 | 16750 | 3.1974 |
| 0.603 | 135.0 | 16875 | 3.1371 |
| 0.526 | 136.0 | 17000 | 3.1381 |
| 0.5767 | 137.0 | 17125 | 3.1614 |
| 0.5591 | 138.0 | 17250 | 3.2744 |
| 0.5609 | 139.0 | 17375 | 3.0405 |
| 0.5229 | 140.0 | 17500 | 3.0773 |
| 0.5367 | 141.0 | 17625 | 3.1367 |
| 0.5719 | 142.0 | 17750 | 3.1770 |
| 0.5172 | 143.0 | 17875 | 3.0953 |
| 0.5592 | 144.0 | 18000 | 3.2524 |
| 0.5422 | 145.0 | 18125 | 3.1534 |
| 0.5007 | 146.0 | 18250 | 3.1571 |
| 0.5348 | 147.0 | 18375 | 3.0949 |
| 0.5123 | 148.0 | 18500 | 3.1381 |
| 0.4839 | 149.0 | 18625 | 3.1624 |
| 0.5207 | 150.0 | 18750 | 3.0585 |
| 0.5236 | 151.0 | 18875 | 3.0886 |
| 0.5144 | 152.0 | 19000 | 3.1348 |
| 0.4882 | 153.0 | 19125 | 3.1027 |
| 0.4618 | 154.0 | 19250 | 3.1335 |
| 0.4573 | 155.0 | 19375 | 3.1687 |
| 0.4956 | 156.0 | 19500 | 3.2155 |
| 0.5073 | 157.0 | 19625 | 3.2512 |
| 0.5334 | 158.0 | 19750 | 3.2262 |
| 0.5014 | 159.0 | 19875 | 3.2350 |
| 0.4519 | 160.0 | 20000 | 3.2416 |
| 0.5042 | 161.0 | 20125 | 3.1955 |
| 0.4624 | 162.0 | 20250 | 3.2036 |
| 0.4577 | 163.0 | 20375 | 3.2498 |
| 0.5032 | 164.0 | 20500 | 3.1687 |
| 0.4894 | 165.0 | 20625 | 3.1920 |
| 0.4621 | 166.0 | 20750 | 3.2275 |
| 0.4896 | 167.0 | 20875 | 3.1416 |
| 0.4998 | 168.0 | 21000 | 3.1483 |
| 0.4941 | 169.0 | 21125 | 3.1408 |
| 0.4307 | 170.0 | 21250 | 3.2056 |
| 0.4284 | 171.0 | 21375 | 3.2112 |
| 0.4431 | 172.0 | 21500 | 3.1926 |
| 0.4429 | 173.0 | 21625 | 3.0972 |
| 0.4832 | 174.0 | 21750 | 3.2309 |
| 0.4417 | 175.0 | 21875 | 3.2027 |
| 0.4022 | 176.0 | 22000 | 3.2380 |
| 0.4777 | 177.0 | 22125 | 3.3315 |
| 0.462 | 178.0 | 22250 | 3.1854 |
| 0.4209 | 179.0 | 22375 | 3.1563 |
| 0.4271 | 180.0 | 22500 | 3.3036 |
| 0.4359 | 181.0 | 22625 | 3.3058 |
| 0.4324 | 182.0 | 22750 | 3.3639 |
| 0.4252 | 183.0 | 22875 | 3.2810 |
| 0.4382 | 184.0 | 23000 | 3.4633 |
| 0.4344 | 185.0 | 23125 | 3.2875 |
| 0.4639 | 186.0 | 23250 | 3.2771 |
| 0.4104 | 187.0 | 23375 | 3.2768 |
| 0.437 | 188.0 | 23500 | 3.3128 |
| 0.4469 | 189.0 | 23625 | 3.2389 |
| 0.4084 | 190.0 | 23750 | 3.4082 |
| 0.4333 | 191.0 | 23875 | 3.3177 |
| 0.4337 | 192.0 | 24000 | 3.3474 |
| 0.4173 | 193.0 | 24125 | 3.3495 |
| 0.386 | 194.0 | 24250 | 3.3413 |
| 0.4279 | 195.0 | 24375 | 3.3327 |
| 0.4046 | 196.0 | 24500 | 3.3383 |
| 0.4163 | 197.0 | 24625 | 3.3235 |
| 0.4032 | 198.0 | 24750 | 3.3549 |
| 0.381 | 199.0 | 24875 | 3.2899 |
| 0.3858 | 200.0 | 25000 | 3.3752 |
| 0.4085 | 201.0 | 25125 | 3.2569 |
| 0.3643 | 202.0 | 25250 | 3.3265 |
| 0.3621 | 203.0 | 25375 | 3.3730 |
| 0.4749 | 204.0 | 25500 | 3.3738 |
| 0.3969 | 205.0 | 25625 | 3.3619 |
| 0.3677 | 206.0 | 25750 | 3.4378 |
| 0.3838 | 207.0 | 25875 | 3.3412 |
| 0.4063 | 208.0 | 26000 | 3.3268 |
| 0.3719 | 209.0 | 26125 | 3.4574 |
| 0.3803 | 210.0 | 26250 | 3.3598 |
| 0.4093 | 211.0 | 26375 | 3.3738 |
| 0.38 | 212.0 | 26500 | 3.2644 |
| 0.3757 | 213.0 | 26625 | 3.3872 |
| 0.4116 | 214.0 | 26750 | 3.4318 |
| 0.3741 | 215.0 | 26875 | 3.2945 |
| 0.3809 | 216.0 | 27000 | 3.4419 |
| 0.3625 | 217.0 | 27125 | 3.4126 |
| 0.3772 | 218.0 | 27250 | 3.3693 |
| 0.3494 | 219.0 | 27375 | 3.3014 |
| 0.3521 | 220.0 | 27500 | 3.4202 |
| 0.3498 | 221.0 | 27625 | 3.2887 |
| 0.3716 | 222.0 | 27750 | 3.5634 |
| 0.346 | 223.0 | 27875 | 3.3463 |
| 0.388 | 224.0 | 28000 | 3.4088 |
| 0.3708 | 225.0 | 28125 | 3.3841 |
| 0.3964 | 226.0 | 28250 | 3.3839 |
| 0.3897 | 227.0 | 28375 | 3.4874 |
| 0.3272 | 228.0 | 28500 | 3.5225 |
| 0.3582 | 229.0 | 28625 | 3.4964 |
| 0.3656 | 230.0 | 28750 | 3.3781 |
| 0.3497 | 231.0 | 28875 | 3.4067 |
| 0.3318 | 232.0 | 29000 | 3.4918 |
| 0.3565 | 233.0 | 29125 | 3.5039 |
| 0.3865 | 234.0 | 29250 | 3.5416 |
| 0.3583 | 235.0 | 29375 | 3.4231 |
| 0.3464 | 236.0 | 29500 | 3.4524 |
| 0.3465 | 237.0 | 29625 | 3.4779 |
| 0.3428 | 238.0 | 29750 | 3.4889 |
| 0.3847 | 239.0 | 29875 | 3.5142 |
| 0.3505 | 240.0 | 30000 | 3.5132 |
| 0.344 | 241.0 | 30125 | 3.5439 |
| 0.3741 | 242.0 | 30250 | 3.4861 |
| 0.3045 | 243.0 | 30375 | 3.4534 |
| 0.3443 | 244.0 | 30500 | 3.4675 |
| 0.3719 | 245.0 | 30625 | 3.4354 |
| 0.3534 | 246.0 | 30750 | 3.4817 |
| 0.3644 | 247.0 | 30875 | 3.5027 |
| 0.3157 | 248.0 | 31000 | 3.5055 |
| 0.3393 | 249.0 | 31125 | 3.3962 |
| 0.3054 | 250.0 | 31250 | 3.4470 |
| 0.3434 | 251.0 | 31375 | 3.5036 |
| 0.3141 | 252.0 | 31500 | 3.5428 |
| 0.3227 | 253.0 | 31625 | 3.5025 |
| 0.3199 | 254.0 | 31750 | 3.5110 |
| 0.3667 | 255.0 | 31875 | 3.5168 |
| 0.3442 | 256.0 | 32000 | 3.5739 |
| 0.366 | 257.0 | 32125 | 3.5094 |
| 0.3053 | 258.0 | 32250 | 3.4360 |
| 0.3595 | 259.0 | 32375 | 3.5895 |
| 0.3329 | 260.0 | 32500 | 3.5869 |
| 0.3139 | 261.0 | 32625 | 3.5317 |
| 0.299 | 262.0 | 32750 | 3.6398 |
| 0.3131 | 263.0 | 32875 | 3.5696 |
| 0.3218 | 264.0 | 33000 | 3.5519 |
| 0.3677 | 265.0 | 33125 | 3.6712 |
| 0.3447 | 266.0 | 33250 | 3.5278 |
| 0.3094 | 267.0 | 33375 | 3.5613 |
| 0.3031 | 268.0 | 33500 | 3.4634 |
| 0.3234 | 269.0 | 33625 | 3.5966 |
| 0.3489 | 270.0 | 33750 | 3.5239 |
| 0.3168 | 271.0 | 33875 | 3.6847 |
| 0.3151 | 272.0 | 34000 | 3.5559 |
| 0.2843 | 273.0 | 34125 | 3.5995 |
| 0.3003 | 274.0 | 34250 | 3.6388 |
| 0.3154 | 275.0 | 34375 | 3.6759 |
| 0.3178 | 276.0 | 34500 | 3.5199 |
| 0.3436 | 277.0 | 34625 | 3.5651 |
| 0.3136 | 278.0 | 34750 | 3.5722 |
| 0.3252 | 279.0 | 34875 | 3.4851 |
| 0.3404 | 280.0 | 35000 | 3.6847 |
| 0.304 | 281.0 | 35125 | 3.5653 |
| 0.3395 | 282.0 | 35250 | 3.6775 |
| 0.3431 | 283.0 | 35375 | 3.5556 |
| 0.2861 | 284.0 | 35500 | 3.6451 |
| 0.3066 | 285.0 | 35625 | 3.6052 |
| 0.3151 | 286.0 | 35750 | 3.6406 |
| 0.3143 | 287.0 | 35875 | 3.6744 |
| 0.2873 | 288.0 | 36000 | 3.6218 |
| 0.3296 | 289.0 | 36125 | 3.4993 |
| 0.3024 | 290.0 | 36250 | 3.5596 |
| 0.3138 | 291.0 | 36375 | 3.5875 |
| 0.2964 | 292.0 | 36500 | 3.5953 |
| 0.2935 | 293.0 | 36625 | 3.5550 |
| 0.2856 | 294.0 | 36750 | 3.5805 |
| 0.2998 | 295.0 | 36875 | 3.6153 |
| 0.2809 | 296.0 | 37000 | 3.6440 |
| 0.3197 | 297.0 | 37125 | 3.6127 |
| 0.2863 | 298.0 | 37250 | 3.6362 |
| 0.3355 | 299.0 | 37375 | 3.7291 |
| 0.2942 | 300.0 | 37500 | 3.6750 |
| 0.3187 | 301.0 | 37625 | 3.6617 |
| 0.3191 | 302.0 | 37750 | 3.6898 |
| 0.2716 | 303.0 | 37875 | 3.6238 |
| 0.2911 | 304.0 | 38000 | 3.6409 |
| 0.3231 | 305.0 | 38125 | 3.6807 |
| 0.2723 | 306.0 | 38250 | 3.7038 |
| 0.2812 | 307.0 | 38375 | 3.6565 |
| 0.3225 | 308.0 | 38500 | 3.6680 |
| 0.2803 | 309.0 | 38625 | 3.7389 |
| 0.2852 | 310.0 | 38750 | 3.7257 |
| 0.2958 | 311.0 | 38875 | 3.7873 |
| 0.3191 | 312.0 | 39000 | 3.8238 |
| 0.2815 | 313.0 | 39125 | 3.7388 |
| 0.2681 | 314.0 | 39250 | 3.7543 |
| 0.3259 | 315.0 | 39375 | 3.8022 |
| 0.2804 | 316.0 | 39500 | 3.7106 |
| 0.3214 | 317.0 | 39625 | 3.6244 |
| 0.2732 | 318.0 | 39750 | 3.7712 |
| 0.3054 | 319.0 | 39875 | 3.6776 |
| 0.263 | 320.0 | 40000 | 3.6850 |
| 0.2644 | 321.0 | 40125 | 3.7169 |
| 0.2796 | 322.0 | 40250 | 3.7958 |
| 0.2928 | 323.0 | 40375 | 3.7770 |
| 0.2774 | 324.0 | 40500 | 3.7142 |
| 0.3048 | 325.0 | 40625 | 3.7942 |
| 0.2637 | 326.0 | 40750 | 3.7499 |
| 0.2549 | 327.0 | 40875 | 3.7323 |
| 0.2681 | 328.0 | 41000 | 3.8373 |
| 0.2735 | 329.0 | 41125 | 3.7600 |
| 0.291 | 330.0 | 41250 | 3.6715 |
| 0.278 | 331.0 | 41375 | 3.6660 |
| 0.2785 | 332.0 | 41500 | 3.7076 |
| 0.2632 | 333.0 | 41625 | 3.7408 |
| 0.2994 | 334.0 | 41750 | 3.7214 |
| 0.2563 | 335.0 | 41875 | 3.7326 |
| 0.2755 | 336.0 | 42000 | 3.7088 |
| 0.287 | 337.0 | 42125 | 3.7493 |
| 0.3144 | 338.0 | 42250 | 3.7180 |
| 0.2816 | 339.0 | 42375 | 3.7289 |
| 0.2515 | 340.0 | 42500 | 3.6592 |
| 0.2647 | 341.0 | 42625 | 3.6483 |
| 0.2833 | 342.0 | 42750 | 3.7359 |
| 0.2678 | 343.0 | 42875 | 3.7351 |
| 0.2929 | 344.0 | 43000 | 3.7129 |
| 0.2604 | 345.0 | 43125 | 3.7604 |
| 0.2406 | 346.0 | 43250 | 3.6867 |
| 0.2802 | 347.0 | 43375 | 3.6935 |
| 0.2702 | 348.0 | 43500 | 3.6744 |
| 0.2836 | 349.0 | 43625 | 3.7118 |
| 0.2454 | 350.0 | 43750 | 3.7354 |
| 0.238 | 351.0 | 43875 | 3.7200 |
| 0.2663 | 352.0 | 44000 | 3.7351 |
| 0.272 | 353.0 | 44125 | 3.7509 |
| 0.2422 | 354.0 | 44250 | 3.7413 |
| 0.3032 | 355.0 | 44375 | 3.6266 |
| 0.2617 | 356.0 | 44500 | 3.7229 |
| 0.2618 | 357.0 | 44625 | 3.7356 |
| 0.2619 | 358.0 | 44750 | 3.7178 |
| 0.2588 | 359.0 | 44875 | 3.7896 |
| 0.2508 | 360.0 | 45000 | 3.7686 |
| 0.2791 | 361.0 | 45125 | 3.7918 |
| 0.27 | 362.0 | 45250 | 3.7870 |
| 0.2841 | 363.0 | 45375 | 3.7675 |
| 0.2776 | 364.0 | 45500 | 3.7090 |
| 0.2752 | 365.0 | 45625 | 3.6754 |
| 0.2727 | 366.0 | 45750 | 3.6542 |
| 0.2423 | 367.0 | 45875 | 3.6399 |
| 0.263 | 368.0 | 46000 | 3.6337 |
| 0.2562 | 369.0 | 46125 | 3.7343 |
| 0.2704 | 370.0 | 46250 | 3.6502 |
| 0.2604 | 371.0 | 46375 | 3.7519 |
| 0.2312 | 372.0 | 46500 | 3.7563 |
| 0.2761 | 373.0 | 46625 | 3.6497 |
| 0.265 | 374.0 | 46750 | 3.7312 |
| 0.2545 | 375.0 | 46875 | 3.7273 |
| 0.2551 | 376.0 | 47000 | 3.8266 |
| 0.2604 | 377.0 | 47125 | 3.7659 |
| 0.252 | 378.0 | 47250 | 3.8003 |
| 0.2468 | 379.0 | 47375 | 3.7535 |
| 0.289 | 380.0 | 47500 | 3.7376 |
| 0.2449 | 381.0 | 47625 | 3.7216 |
| 0.2471 | 382.0 | 47750 | 3.7704 |
| 0.2627 | 383.0 | 47875 | 3.7510 |
| 0.2454 | 384.0 | 48000 | 3.8007 |
| 0.2391 | 385.0 | 48125 | 3.7535 |
| 0.2452 | 386.0 | 48250 | 3.7905 |
| 0.2608 | 387.0 | 48375 | 3.8170 |
| 0.2662 | 388.0 | 48500 | 3.7836 |
| 0.235 | 389.0 | 48625 | 3.7754 |
| 0.2425 | 390.0 | 48750 | 3.7201 |
| 0.2463 | 391.0 | 48875 | 3.8298 |
| 0.2507 | 392.0 | 49000 | 3.8252 |
| 0.2451 | 393.0 | 49125 | 3.7625 |
| 0.2418 | 394.0 | 49250 | 3.7482 |
| 0.2397 | 395.0 | 49375 | 3.7875 |
| 0.2773 | 396.0 | 49500 | 3.8159 |
| 0.256 | 397.0 | 49625 | 3.8322 |
| 0.2304 | 398.0 | 49750 | 3.8012 |
| 0.2363 | 399.0 | 49875 | 3.8033 |
| 0.2575 | 400.0 | 50000 | 3.7727 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
pkarypis/opt-125m-sft
|
pkarypis
| 2024-01-04T03:16:01Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:facebook/opt-125m",
"base_model:finetune:facebook/opt-125m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T02:40:37Z |
---
license: other
base_model: facebook/opt-125m
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: opt-125m-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-125m-sft
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 512
- total_eval_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5574 | 1.0 | 253 | 1.9307 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.15.0
|
Lit4pCol4b/mit-b1_segformer_ADE20k_RGB_IS_v1
|
Lit4pCol4b
| 2024-01-04T03:07:38Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"segformer",
"generated_from_keras_callback",
"base_model:nvidia/mit-b1",
"base_model:finetune:nvidia/mit-b1",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-01-04T01:07:46Z |
---
license: other
base_model: nvidia/mit-b1
tags:
- generated_from_keras_callback
model-index:
- name: Lit4pCol4b/mit-b1_segformer_ADE20k_RGB_IS_v1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Lit4pCol4b/mit-b1_segformer_ADE20k_RGB_IS_v1
This model is a fine-tuned version of [nvidia/mit-b1](https://huggingface.co/nvidia/mit-b1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0787
- Validation Loss: 0.1007
- Validation Mean Iou: 0.7646
- Validation Mean Accuracy: 0.8701
- Validation Overall Accuracy: 0.9687
- Validation Accuracy Unlabeled: 0.6791
- Validation Accuracy Objeto Interes: 0.9475
- Validation Accuracy Agua: 0.9838
- Validation Iou Unlabeled: 0.5173
- Validation Iou Objeto Interes: 0.8005
- Validation Iou Agua: 0.9760
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 6e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Validation Mean Iou | Validation Mean Accuracy | Validation Overall Accuracy | Validation Accuracy Unlabeled | Validation Accuracy Objeto Interes | Validation Accuracy Agua | Validation Iou Unlabeled | Validation Iou Objeto Interes | Validation Iou Agua | Epoch |
|:----------:|:---------------:|:-------------------:|:------------------------:|:---------------------------:|:-----------------------------:|:----------------------------------:|:------------------------:|:------------------------:|:-----------------------------:|:-------------------:|:-----:|
| 0.6580 | 0.6718 | 0.4196 | 0.6281 | 0.8767 | 0.0254 | 0.9466 | 0.9124 | 0.0232 | 0.3345 | 0.9012 | 0 |
| 0.4551 | 0.5040 | 0.5131 | 0.6832 | 0.9126 | 0.1813 | 0.9216 | 0.9467 | 0.1234 | 0.4837 | 0.9322 | 1 |
| 0.3472 | 0.2565 | 0.5381 | 0.6560 | 0.9375 | 0.1035 | 0.8839 | 0.9805 | 0.0930 | 0.5671 | 0.9542 | 2 |
| 0.2846 | 0.2434 | 0.6188 | 0.7343 | 0.9442 | 0.3415 | 0.8847 | 0.9767 | 0.2514 | 0.6486 | 0.9564 | 3 |
| 0.2383 | 0.2245 | 0.6401 | 0.7568 | 0.9469 | 0.4203 | 0.8735 | 0.9767 | 0.2975 | 0.6644 | 0.9586 | 4 |
| 0.2075 | 0.2243 | 0.6606 | 0.7809 | 0.9501 | 0.4690 | 0.8975 | 0.9764 | 0.3332 | 0.6879 | 0.9608 | 5 |
| 0.1943 | 0.1820 | 0.6721 | 0.7704 | 0.9559 | 0.4301 | 0.8964 | 0.9847 | 0.3423 | 0.7083 | 0.9658 | 6 |
| 0.1835 | 0.2237 | 0.6866 | 0.8243 | 0.9510 | 0.5945 | 0.9077 | 0.9707 | 0.3844 | 0.7151 | 0.9601 | 7 |
| 0.1645 | 0.1638 | 0.7110 | 0.8204 | 0.9586 | 0.6026 | 0.8779 | 0.9808 | 0.4292 | 0.7369 | 0.9670 | 8 |
| 0.1574 | 0.1359 | 0.7140 | 0.8058 | 0.9616 | 0.5380 | 0.8933 | 0.9861 | 0.4197 | 0.7527 | 0.9695 | 9 |
| 0.1737 | 0.1421 | 0.7075 | 0.8042 | 0.9596 | 0.5320 | 0.8965 | 0.9841 | 0.4037 | 0.7513 | 0.9675 | 10 |
| 0.1608 | 0.1613 | 0.7046 | 0.8348 | 0.9564 | 0.6010 | 0.9285 | 0.9750 | 0.4156 | 0.7325 | 0.9655 | 11 |
| 0.1425 | 0.1387 | 0.7268 | 0.8355 | 0.9618 | 0.6140 | 0.9109 | 0.9816 | 0.4499 | 0.7605 | 0.9698 | 12 |
| 0.1299 | 0.1230 | 0.7198 | 0.8184 | 0.9628 | 0.5475 | 0.9225 | 0.9851 | 0.4286 | 0.7595 | 0.9714 | 13 |
| 0.1286 | 0.1279 | 0.7267 | 0.8320 | 0.9630 | 0.5856 | 0.9270 | 0.9833 | 0.4473 | 0.7614 | 0.9715 | 14 |
| 0.1322 | 0.1201 | 0.7428 | 0.8380 | 0.9651 | 0.6334 | 0.8954 | 0.9854 | 0.4772 | 0.7791 | 0.9722 | 15 |
| 0.1203 | 0.1076 | 0.7439 | 0.8294 | 0.9663 | 0.6001 | 0.9000 | 0.9880 | 0.4712 | 0.7872 | 0.9732 | 16 |
| 0.1154 | 0.1314 | 0.7417 | 0.8557 | 0.9633 | 0.6671 | 0.9198 | 0.9802 | 0.4752 | 0.7794 | 0.9706 | 17 |
| 0.1145 | 0.1098 | 0.7446 | 0.8438 | 0.9662 | 0.6183 | 0.9281 | 0.9852 | 0.4827 | 0.7770 | 0.9739 | 18 |
| 0.1131 | 0.0994 | 0.7500 | 0.8368 | 0.9676 | 0.6077 | 0.9145 | 0.9881 | 0.4834 | 0.7919 | 0.9748 | 19 |
| 0.1101 | 0.1157 | 0.7590 | 0.8657 | 0.9664 | 0.7130 | 0.9015 | 0.9827 | 0.5107 | 0.7928 | 0.9733 | 20 |
| 0.1045 | 0.1099 | 0.7513 | 0.8565 | 0.9664 | 0.6570 | 0.9288 | 0.9835 | 0.4959 | 0.7841 | 0.9739 | 21 |
| 0.1031 | 0.1045 | 0.7511 | 0.8522 | 0.9668 | 0.6398 | 0.9323 | 0.9846 | 0.4911 | 0.7878 | 0.9743 | 22 |
| 0.1038 | 0.1245 | 0.7335 | 0.8535 | 0.9628 | 0.6322 | 0.9488 | 0.9794 | 0.4609 | 0.7683 | 0.9713 | 23 |
| 0.0989 | 0.1130 | 0.7476 | 0.8608 | 0.9652 | 0.6641 | 0.9372 | 0.9813 | 0.4895 | 0.7805 | 0.9729 | 24 |
| 0.0961 | 0.0993 | 0.7534 | 0.8560 | 0.9672 | 0.6481 | 0.9356 | 0.9844 | 0.4949 | 0.7904 | 0.9748 | 25 |
| 0.0931 | 0.0977 | 0.7616 | 0.8574 | 0.9684 | 0.6623 | 0.9242 | 0.9858 | 0.5099 | 0.7995 | 0.9754 | 26 |
| 0.0913 | 0.0899 | 0.7685 | 0.8547 | 0.9701 | 0.6575 | 0.9184 | 0.9883 | 0.5192 | 0.8096 | 0.9768 | 27 |
| 0.0899 | 0.0984 | 0.7572 | 0.8550 | 0.9683 | 0.6393 | 0.9398 | 0.9858 | 0.5015 | 0.7940 | 0.9759 | 28 |
| 0.0918 | 0.1307 | 0.7440 | 0.8719 | 0.9635 | 0.6838 | 0.9545 | 0.9773 | 0.4872 | 0.7735 | 0.9713 | 29 |
| 0.0919 | 0.1239 | 0.7405 | 0.8590 | 0.9641 | 0.6442 | 0.9526 | 0.9801 | 0.4707 | 0.7784 | 0.9725 | 30 |
| 0.0925 | 0.0990 | 0.7699 | 0.8629 | 0.9696 | 0.6859 | 0.9163 | 0.9865 | 0.5271 | 0.8067 | 0.9761 | 31 |
| 0.0889 | 0.1069 | 0.7563 | 0.8708 | 0.9664 | 0.6864 | 0.9450 | 0.9811 | 0.5038 | 0.7913 | 0.9738 | 32 |
| 0.0836 | 0.0913 | 0.7707 | 0.8617 | 0.9702 | 0.6714 | 0.9265 | 0.9873 | 0.5265 | 0.8086 | 0.9770 | 33 |
| 0.0822 | 0.1041 | 0.7645 | 0.8788 | 0.9672 | 0.7170 | 0.9383 | 0.9809 | 0.5161 | 0.8035 | 0.9740 | 34 |
| 0.0803 | 0.0981 | 0.7699 | 0.8721 | 0.9691 | 0.6987 | 0.9334 | 0.9843 | 0.5291 | 0.8046 | 0.9759 | 35 |
| 0.0800 | 0.1018 | 0.7597 | 0.8681 | 0.9678 | 0.6728 | 0.9485 | 0.9830 | 0.5104 | 0.7935 | 0.9752 | 36 |
| 0.0779 | 0.0975 | 0.7727 | 0.8769 | 0.9692 | 0.7185 | 0.9286 | 0.9837 | 0.5349 | 0.8075 | 0.9757 | 37 |
| 0.0756 | 0.0984 | 0.7697 | 0.8742 | 0.9691 | 0.7003 | 0.9385 | 0.9838 | 0.5280 | 0.8051 | 0.9760 | 38 |
| 0.0787 | 0.1007 | 0.7646 | 0.8701 | 0.9687 | 0.6791 | 0.9475 | 0.9838 | 0.5173 | 0.8005 | 0.9760 | 39 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
sambar/sambar-7b-dpo-lora
|
sambar
| 2024-01-04T03:06:55Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-03T17:52:03Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: sambar-7b-dpo-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sambar-7b-dpo-lora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5747
- Rewards/chosen: -0.0141
- Rewards/rejected: -0.4147
- Rewards/accuracies: 0.7060
- Rewards/margins: 0.4006
- Logps/rejected: -221.3069
- Logps/chosen: -263.0773
- Logits/rejected: -2.1478
- Logits/chosen: -2.2594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6213 | 1.0 | 242 | 0.6182 | 0.0426 | -0.1569 | 0.6860 | 0.1995 | -218.7293 | -262.5110 | -2.1605 | -2.2727 |
| 0.5903 | 2.0 | 484 | 0.5826 | 0.0046 | -0.3500 | 0.6940 | 0.3546 | -220.6603 | -262.8906 | -2.1517 | -2.2634 |
| 0.5743 | 3.0 | 726 | 0.5747 | -0.0141 | -0.4147 | 0.7060 | 0.4006 | -221.3069 | -263.0773 | -2.1478 | -2.2594 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
detakarang/tinysql
|
detakarang
| 2024-01-04T02:53:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:adapter:unsloth/tinyllama-bnb-4bit",
"region:us"
] | null | 2024-01-04T02:52:51Z |
---
library_name: peft
base_model: unsloth/tinyllama-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
jeiku/Cognizant_3B
|
jeiku
| 2024-01-04T02:52:28Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"custom_code",
"en",
"dataset:yahma/alpaca-cleaned",
"dataset:Xilabs/PIPPA-alpaca",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-03T23:53:31Z |
---
license: other
datasets:
- yahma/alpaca-cleaned
- Xilabs/PIPPA-alpaca
language:
- en
---
This is a refined version of my Rosa line, utilizing the same datasets as Foundation, but with a more sane approach to the creation of the LoRAs. This should be an improvement in every way.
The base of this model is Rosa v1, which should provide a more coherent end product than v3 after some extensive testing. It is my hope that this model will serve well as a character driven personal assistant/companion for users with low end hardware.
Feel free to leave commentary in the Community tab. I am always looking for new ways to improve my work, but for now, I consider this the realization of the goals for this project.
|
rasyosef/bert_uncased_fake_news
|
rasyosef
| 2024-01-04T02:42:54Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-17T00:42:57Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: bert_uncased_fake_news
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert_uncased_fake_news
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the Kaggle English fake news detection [dataset](https://www.kaggle.com/datasets/sadikaljarif/fake-news-detection-dataset-english).
It achieves the following results on the evaluation set:
- Train Loss: 0.0015
- Train Accuracy: 0.9997
- Validation Loss: 0.0048
- Validation Accuracy: 0.9983
- Test F1 Score (macro): 0.9989
## How to use
You can use this model directly with a pipeline for text classification:
```python
>>> from transformers import pipeline
>>> classifier = pipeline("text-classification", model="rasyosef/bert_uncased_fake_news")
>>> classifier(["Wow! Talk about clueless! Austen Fletcher approaches anti-Trump protesters and gets clueless answers on why they re against Trump:Thought you might enjoy this @PrisonPlanet @allidoisowen @JackPosobiec pic.twitter.com/kdYm2WlfdB austen fletcher (@fleccas) July 17, 2017"])
[{'label': 'Fake News', 'score': 0.9999557733535767}]
```
## Model description
More information needed
## Intended uses & limitations
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 2814, 'end_learning_rate': 0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
blotus51515/ppo-LunarLander-v2
|
blotus51515
| 2024-01-04T02:38:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-03T21:14:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.10 +/- 15.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename is an assumption; check the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is assumed; verify it against the files in this repo.
checkpoint = load_from_hub(repo_id="blotus51515/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
slapula/commonvoice_be_tts_female_1
|
slapula
| 2024-01-04T02:32:57Z | 4 | 0 |
transformers
|
[
"transformers",
"be",
"dataset:mozilla-foundation/common_voice_16_0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-12-29T01:09:12Z |
---
license: mit
language:
- be
datasets:
- mozilla-foundation/common_voice_16_0
---
# GlowTTS + HifiGAN Female Belarusian Voice #1
This is my first attempt at training a Belarusian voice using [Coqui TTS](https://docs.coqui.ai/en/dev/index.html) and Mozilla's [CommonVoice](https://commonvoice.mozilla.org/en) dataset. This model was developed based on the [excellent recipe](https://github.com/coqui-ai/TTS/tree/dev/recipes/bel-alex73) provided by bel-alex73. For this particular model, I tweaked the search results to find single speakers with over 30 hours of audio and selected speakers based on clarity and relatively slow speaking cadence. This was a manual selection process that involved tweaking bel-alex73's `choose_speaker.ipynb` notebook to show/process more than just the top-ranked speaker.
This model is generated from the following client_id: 216de8fc1b7973a11926dd6694d2a97c3ceaf5a626ec4c8d2c85c8140a10ec5ed59bd6ee756c8c3451ee0cf784e4af445748cd69a2936102489b95f3409cd0d7
I am not a native speaker of Belarusian and I am doing this to assist in my language learning efforts. I am open to any and all feedback (esp. from native speakers) so feel free to post questions/comments.
## Synthesizing text to speech
Input text needs to be phoneme-ized in order for this model to process the speech correctly. This process has been documented in [bel-alex73's README](https://github.com/coqui-ai/TTS/tree/dev/recipes/bel-alex73#prepare-to-training---locally).
```
tts --text "<phonemes>" --out_path output.wav \
--config_path config.json \
--model_path best_model.pth \
--vocoder_config_path vocoder_config.json \
--vocoder_path vocoder_best_model.pth
```
|
NotoriousH2/test_solar_10.7b_v1.0
|
NotoriousH2
| 2024-01-04T02:16:23Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:upstage/SOLAR-10.7B-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-v1.0",
"region:us"
] | null | 2024-01-04T02:15:46Z |
---
library_name: peft
base_model: upstage/SOLAR-10.7B-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
zhenchuan/distilbert-base-uncased-finetuned-imdb
|
zhenchuan
| 2024-01-04T02:12:08Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-04T02:08:55Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6819 | 1.0 | 157 | 2.4978 |
| 2.5872 | 2.0 | 314 | 2.4488 |
| 2.525 | 3.0 | 471 | 2.4836 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
alex2awesome/stance-detection-classification-model
|
alex2awesome
| 2024-01-04T01:52:06Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-04T01:36:50Z |
---
license: apache-2.0
language:
- en
---
Stance detection model distilled from a news dataset labeled by a larger model. The larger model was trained on a combination of stance datasets from the literature:
FNC-1 (Pomerleau and Rao, 2017), Perspectrum (Chen et al., 2019), ARC (Habernal et al., 2017), Emergent (Ferreira and Vlachos, 2016), and NewsClaims (Reddy et al., 2021).
It achieves the scores below on the distilled data. Hand-checked results are to come, but similar models with similar scores achieved a 0.57.
```json
{
    "eval_samples": 6482,
    "test_f1": 0.43183643890513856,
    "test_loss": 2.9971516132354736,
    "test_runtime": 46.1838,
    "test_samples_per_second": 140.352,
    "test_steps_per_second": 8.791
}
```
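A minimal loading sketch with the standard `transformers` pipeline (the returned label names depend on this model's config, so treat them as illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="alex2awesome/stance-detection-classification-model")
print(classifier(["The new policy will reduce emissions, the minister claimed."]))
```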
|
IParraMartin/XLM-AgloBERTa-fi-ner
|
IParraMartin
| 2024-01-04T01:43:02Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-04T01:42:12Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLM-AgloBERTa-fi-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-AgloBERTa-fi-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2917
- Precision: 0.8818
- Recall: 0.8970
- F1: 0.8893
- Accuracy: 0.9578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2709 | 1.0 | 1250 | 0.2630 | 0.8052 | 0.8239 | 0.8145 | 0.9296 |
| 0.1861 | 2.0 | 2500 | 0.2313 | 0.8182 | 0.8543 | 0.8358 | 0.9357 |
| 0.1341 | 3.0 | 3750 | 0.1858 | 0.8599 | 0.8644 | 0.8621 | 0.9492 |
| 0.1035 | 4.0 | 5000 | 0.2108 | 0.8705 | 0.8738 | 0.8721 | 0.9520 |
| 0.0784 | 5.0 | 6250 | 0.2073 | 0.8736 | 0.8779 | 0.8757 | 0.9538 |
| 0.0572 | 6.0 | 7500 | 0.2226 | 0.8758 | 0.8828 | 0.8793 | 0.9549 |
| 0.0358 | 7.0 | 8750 | 0.2514 | 0.8763 | 0.8885 | 0.8824 | 0.9566 |
| 0.0226 | 8.0 | 10000 | 0.2522 | 0.8792 | 0.8922 | 0.8857 | 0.9563 |
| 0.0151 | 9.0 | 11250 | 0.2836 | 0.8795 | 0.8949 | 0.8871 | 0.9576 |
| 0.0079 | 10.0 | 12500 | 0.2917 | 0.8818 | 0.8970 | 0.8893 | 0.9578 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ericpolewski/AIRIC-The-Mistral-8.0bpw-exl2
|
ericpolewski
| 2024-01-04T01:32:49Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2024-01-04T01:30:19Z |
---
license: mit
---
This is an 8-bit exl2 quant of https://huggingface.co/ericpolewski/AIRIC-The-Mistral
5-bit available at https://huggingface.co/ericpolewski/AIRIC-The-Mistral-5.0bpw-exl2
|
Tachi67/ReplanningFlowModule
|
Tachi67
| 2024-01-04T01:31:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-12-07T09:51:52Z |
### Structure of ReplanningFlow
```
goal (info on the old plan), plan (old plan), plan_file_location
|
v
+---------------+
| Controller | --------<<<<-----------+
+---------------+ |
| |
| (command, command args) |
| |
v |
+------------------+ |
| Executor | Each branch is an |
| (Tree Structure) | executor |
+------------------+ |
| ^
| (summary) |
| |
v |
| |
+-> goes back to the Controller>-+
```
Structure of the Executors:
```
+-------------------+
| Branching |
| Executor |
+-------------------+
/ \
/ \
/ \
/ \
write_plan ask_user
```
About the branches:
- [ask_user](https://huggingface.co/Tachi67/PlanWriterFlowModule/blob/main/PlanWriterAskUserFlow.py): Ask user for info / confirmation, etc.
- [write_plan](https://huggingface.co/Tachi67/InteractivePlanGenFlowModule): Generates plan (user edit is allowed) and fetches user feedback.
- The PlanGenerator of write_plan is replaced with [NewPlanGenFlow](https://huggingface.co/Tachi67/ReplanningFlowModule/blob/main/NewPlanGenFlow.py) to re-plan instead of write plan.
How it works:
The Controller calls write_plan until the user expresses satisfaction in their feedback, then it finishes.
# Table of Contents
* [run\_replanning](#run_replanning)
* [ReplanningAskUserFlow](#ReplanningAskUserFlow)
* [ReplanningAskUserFlow](#ReplanningAskUserFlow.ReplanningAskUserFlow)
* [NewPlanGenFlow](#NewPlanGenFlow)
* [ReplanningFlow](#ReplanningFlow)
* [ReplanningFlow](#ReplanningFlow.ReplanningFlow)
* [\_\_init\_\_](#__init__)
<a id="run_replanning"></a>
# run\_replanning
<a id="ReplanningAskUserFlow"></a>
# ReplanningAskUserFlow
<a id="ReplanningAskUserFlow.ReplanningAskUserFlow"></a>
## ReplanningAskUserFlow Objects
```python
class ReplanningAskUserFlow(HumanStandardInputFlow)
```
Refer to: https://huggingface.co/Tachi67/ExtendLibraryFlowModule/blob/main/ExtLibAskUserFlow.py
<a id="NewPlanGenFlow"></a>
# NewPlanGenFlow
<a id="ReplanningFlow"></a>
# ReplanningFlow
<a id="ReplanningFlow.ReplanningFlow"></a>
## ReplanningFlow Objects
```python
class ReplanningFlow(PlanWriterFlow)
```
This flow inherits from PlanWriterFlow.
By changing prompts and injecting the proper information into the controller and the PlanGenerator, we are able to achieve replanning.
*Input Interface*:
- `goal` (str): information on the old plan (e.g. what is wrong)
- `plan` (str): the old plan
- `plan_file_location` (str): the location of the old plan file
*Output Interface*:
- `plan` (str): the new plan
- `status`: "finished" or "unfinished"
- `summary` (str): summary of the flow, will be written to the log file of the caller flow.
- `result` (str): result of the flow, will be passed to the controller of the caller flow.
<a id="__init__"></a>
# \_\_init\_\_
|
alex2awesome/stance-detection-t5
|
alex2awesome
| 2024-01-04T01:31:15Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-12T11:14:25Z |
---
license: apache-2.0
---
Stance detection model distilled from a news dataset labeled by a larger model. The larger model was trained on a combination of
stance datasets from the literature:
FNC-1 (Pomerleau and Rao, 2017),
Perspectrum (Chen et al., 2019),
ARC (Habernal et al., 2017),
Emergent (Ferreira and Vlachos, 2016),
and NewsClaims (Reddy et al., 2021).
Achieves a 0.5712643678160919 F1-score on hand-labeled, in-domain news data.
To run:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name_or_path = 'alex2awesome/stance-detection-t5'
model = T5ForConditionalGeneration.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
tokenizer_with_prefix_space = AutoTokenizer.from_pretrained(model_name_or_path, add_prefix_space=True)

def get_tokens_as_tuple(word):
    return tuple(tokenizer_with_prefix_space([word], add_special_tokens=False).input_ids[0])

text = "..."  # the news text to classify
input_ids = tokenizer(text, return_tensors="pt").input_ids
y_pred_gen_output = model.generate(
    input_ids,
    renormalize_logits=True,
    sequence_bias={
        get_tokens_as_tuple('__Affirm__'): 0.143841,
        get_tokens_as_tuple('__Discuss__'): -0.294732,
        get_tokens_as_tuple('__Neutral__'): -0.103820,
        get_tokens_as_tuple('__Refute__'): 0.637734,
    },
)
print(tokenizer.decode(y_pred_gen_output[0]))
```
You can tweak the class weights yourself if you want.
|
Tachi67/InteractiveCodeGenFlowModule
|
Tachi67
| 2024-01-04T01:28:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-11-30T12:22:23Z |
### Structure of InteractiveCodeGenFlow
```
goal
|
|
|
v
+------------------+
| MemReading       | Reads in the memory in the memory files. (https://huggingface.co/Tachi67/MemoryReadingFlowModule)
+------------------+
|
| (code_library: function signatures of the library)
|
v
+------------------+
| CodeGenerator | Generates code given goal, function signatures of the library. (https://huggingface.co/Tachi67/CodeGeneratorFlowModule)
+------------------+
|
| (code, language_of_code)
|
v
+-------------------+
| CodeFileEdit | Edits a temp code file with the generated code. (https://huggingface.co/Tachi67/CodeFileEditFlowModule)
+-------------------+
|
| (temp_code_file_location)
|
v
+------------------+
| ParseFeedback | Opens the temp file until the user closes it, then parses the user feedback. (https://huggingface.co/Tachi67/ParseFeedbackFlowModule)
+------------------+
|
| (code, feedback)
|
v
code, feedback
```
Memory_files:
- library.py
# Table of Contents
* [InteractiveCodeGenFlow](#InteractiveCodeGenFlow)
* [InteractiveCodeGenFlow](#InteractiveCodeGenFlow.InteractiveCodeGenFlow)
* [\_\_init\_\_](#__init__)
<a id="InteractiveCodeGenFlow"></a>
# InteractiveCodeGenFlow
<a id="InteractiveCodeGenFlow.InteractiveCodeGenFlow"></a>
## InteractiveCodeGenFlow Objects
```python
class InteractiveCodeGenFlow(SequentialFlow)
```
This flow writes code in an interactive manner. It is a sequential flow composed of:
1. MemoryReading: reads in the code library.
2. CodeGenerator: generates code based on the goal and functions in the code library.
3. CodeFileEditor: writes the generated code to a temp file for the user to see, edit and provide feedback.
4. ParseFeedback: opens up the temp file with vscode and parses the feedback from the user.
*Input Interface*:
- `goal`
*Output Interface*:
- `code`
- `feedback`
- `temp_code_file_location`
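The card does not show how to pull the module in; as a rough sketch, flow modules like this one are typically synced from the Hub with aiflows' `flow_verse` before instantiation (the `revision` pin here is an assumption):
```python
# Sketch: fetch this flow module from the Hugging Face Hub with aiflows.
# "main" as the revision is an assumption; pin a specific commit in practice.
from aiflows import flow_verse

dependencies = [
    {"url": "Tachi67/InteractiveCodeGenFlowModule", "revision": "main"},
]
flow_verse.sync_dependencies(dependencies)
```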
<a id="__init__"></a>
# \_\_init\_\_
|
Tachi67/InteractivePlanGenFlowModule
|
Tachi67
| 2024-01-04T01:28:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-12-06T12:42:21Z |
### Structure of InteractivePlanGenFlow
```
goal
|
|
|
v
+------------------+
| PlanGenerator | Generates plan given goal. (https://huggingface.co/Tachi67/PlanGeneratorFlowModule)
+------------------+
|
| (plan)
|
v
+-------------------+
| PlanFileEdit | Edits a temp file with the generated plan. (https://huggingface.co/Tachi67/PlanFileEditFlowModule)
+-------------------+
|
| (temp_plan_file_location)
|
v
+------------------+
| ParseFeedback | Opens the temp file until the user closes it, then parses the user feedback. (https://huggingface.co/Tachi67/ParseFeedbackFlowModule)
+------------------+
|
| (plan, feedback)
|
v
plan, feedback
```
# Table of Contents
* [InteractivePlanGenFlow](#InteractivePlanGenFlow)
* [InteractivePlanGenFlow](#InteractivePlanGenFlow.InteractivePlanGenFlow)
* [\_\_init\_\_](#__init__)
<a id="InteractivePlanGenFlow"></a>
# InteractivePlanGenFlow
<a id="InteractivePlanGenFlow.InteractivePlanGenFlow"></a>
## InteractivePlanGenFlow Objects
```python
class InteractivePlanGenFlow(SequentialFlow)
```
This flow generates a plan in an interactive manner. It is a sequential flow composed of:
1. PlanGenerator: generates plan based on the goal.
2. PlanFileEditor: writes the generated plan to a temp file for the user to see, edit and provide feedback.
3. ParseFeedback: opens up the temp file with vscode and parses the feedback from the user.
*Input Interface*:
- `goal`
- `plan_file_location`
*Output Interface*:
- `plan`
- `feedback`
- `temp_plan_file_location`
<a id="__init__"></a>
# \_\_init\_\_
|
kanishka/smolm-autoreg-bpe-counterfactual-babylm-indef-anan-seed_211-3e-4
|
kanishka
| 2024-01-04T01:28:15Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-03T03:00:50Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-indef-anan-seed_211-3e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual-babylm-indef-anan-seed_211-3e-4
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3945
- Accuracy: 0.4087
## Model description
More information needed
## Intended uses & limitations
More information needed
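The card ships without a usage example; below is a minimal generation sketch using the standard `transformers` causal-LM API (the prompt and decoding settings are illustrative, not from the original card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kanishka/smolm-autoreg-bpe-counterfactual-babylm-indef-anan-seed_211-3e-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Greedy decoding of a short continuation; settings are illustrative only.
inputs = tokenizer("The child found", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```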
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 64
- seed: 211
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.7412 | 1.0 | 18595 | 3.9088 | 0.3451 |
| 3.4406 | 2.0 | 37190 | 3.6241 | 0.3755 |
| 3.2959 | 3.0 | 55785 | 3.5145 | 0.3874 |
| 3.2112 | 4.0 | 74380 | 3.4094 | 0.3965 |
| 3.1504 | 5.0 | 92975 | 3.3893 | 0.4001 |
| 3.1002 | 6.0 | 111570 | 3.3745 | 0.4019 |
| 3.0672 | 7.0 | 130165 | 3.3520 | 0.4040 |
| 3.0294 | 8.0 | 148760 | 3.3578 | 0.4047 |
| 3.0041 | 9.0 | 167355 | 3.3483 | 0.4062 |
| 2.9788 | 10.0 | 185950 | 3.3495 | 0.4069 |
| 2.9528 | 11.0 | 204545 | 3.3613 | 0.4070 |
| 2.9294 | 12.0 | 223140 | 3.3388 | 0.4082 |
| 2.9052 | 13.0 | 241735 | 3.3627 | 0.4080 |
| 2.8911 | 14.0 | 260330 | 3.3588 | 0.4082 |
| 2.8705 | 15.0 | 278925 | 3.3720 | 0.4085 |
| 2.8496 | 16.0 | 297520 | 3.3605 | 0.4094 |
| 2.8289 | 17.0 | 316115 | 3.3721 | 0.4089 |
| 2.8096 | 18.0 | 334710 | 3.3822 | 0.4088 |
| 2.7898 | 19.0 | 353305 | 3.3884 | 0.4088 |
| 2.772 | 20.0 | 371900 | 3.3945 | 0.4087 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.14.1
|
intMinsu/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-01040117
|
intMinsu
| 2024-01-04T01:19:45Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft",
"base_model:finetune:microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-03T16:17:42Z |
---
license: apache-2.0
base_model: microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft
tags:
- generated_from_trainer
model-index:
- name: swinv2-large-patch4-window12to16-192to256-22kto1k-ft-01040117
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12to16-192to256-22kto1k-ft-01040117
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
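The card ships without a usage example; below is a minimal classification sketch using the standard `transformers` image-classification API (the image path is a placeholder, and the label set depends on the unknown fine-tuning dataset):
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "intMinsu/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-01040117"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```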
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 777
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|