| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-12 12:31:00 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 555 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-12 12:28:53 |
| card | string | length 11 – 1.01M |
TLME/western-classification
|
TLME
| 2023-09-02T15:28:54Z | 0 | 0 | null |
[
"image-classification",
"license:mit",
"region:us"
] |
image-classification
| 2023-08-07T17:43:47Z |
---
license: mit
pipeline_tag: image-classification
---
A classifier trained with mmpretrain on a ConvNeXtV2-tiny backbone, used to classify anime images according to whether they are drawn in a Western style.
The evaluation accuracy on the validation set is 95%.
It was trained on 7,000 Western and 8,000 non-Western images, with the Western training set sampled from e-hentai.
The model still has notable shortcomings, such as very low recognition accuracy on line-drawing images.
Hugging Face Space: https://huggingface.co/spaces/TLME/western-anime-images-classification
# How to use
Requires Python >= 3.9.
```bash
# 1. Install PyTorch (see https://pytorch.org), then the remaining dependencies
pip install -r requirements.txt
# 2. Edit infer.py and change path = './testimg/' to your target folder
python infer.py
```
|
btamm12/bert-base-uncased-finetuned-wls-manual-7ep-lower
|
btamm12
| 2023-09-02T15:28:50Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T15:26:48Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-wls-manual-7ep-lower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wls-manual-7ep-lower
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
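For orientation, the hyperparameters listed above correspond roughly to the `transformers` Trainer setup sketched below. This is only an illustrative sketch: the training corpus, tokenizer preprocessing, and data collator settings are not documented in the card, so those parts are placeholders.
```python
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Base model and masked-LM collator; the actual training data is not documented in the card.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-wls-manual-7ep-lower",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,  # effective train batch size 64
    num_train_epochs=7,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    # train_dataset=..., eval_dataset=...  (the WLS data used here is not documented)
)
```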
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1113 | 0.93 | 7 | 1.9498 |
| 1.6005 | 2.0 | 15 | 1.5784 |
| 1.4812 | 2.93 | 22 | 1.4474 |
| 1.3854 | 4.0 | 30 | 1.4290 |
| 1.2898 | 4.93 | 37 | 1.2682 |
| 1.2785 | 6.0 | 45 | 1.2677 |
| 1.2535 | 6.53 | 49 | 1.3363 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
btamm12/bert-base-cased-finetuned-wls-manual-7ep
|
btamm12
| 2023-09-02T15:26:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T15:24:40Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-finetuned-wls-manual-7ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-wls-manual-7ep
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2757
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1707 | 0.93 | 7 | 1.9153 |
| 1.658 | 2.0 | 15 | 1.6462 |
| 1.5689 | 2.93 | 22 | 1.5263 |
| 1.4013 | 4.0 | 30 | 1.4385 |
| 1.3501 | 4.93 | 37 | 1.4224 |
| 1.293 | 6.0 | 45 | 1.3189 |
| 1.2473 | 6.53 | 49 | 1.2231 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
olivierhenaff/distilhubert-finetuned-gtzan
|
olivierhenaff
| 2023-09-02T15:22:12Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-02T12:11:45Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7428
- Accuracy: 0.83
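A minimal inference sketch with the `audio-classification` pipeline; the audio filename below is a placeholder.
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="olivierhenaff/distilhubert-finetuned-gtzan")
predictions = classifier("song.wav")  # "song.wav" is a placeholder path to a local audio file
print(predictions)  # list of {"label": ..., "score": ...} entries, one per genre
```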
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7684 | 1.0 | 225 | 1.6143 | 0.46 |
| 0.9707 | 2.0 | 450 | 1.0938 | 0.66 |
| 0.8819 | 3.0 | 675 | 0.7981 | 0.77 |
| 0.6527 | 4.0 | 900 | 0.6805 | 0.8 |
| 0.2499 | 5.0 | 1125 | 0.5896 | 0.81 |
| 0.0371 | 6.0 | 1350 | 0.8279 | 0.79 |
| 0.1651 | 7.0 | 1575 | 0.6830 | 0.81 |
| 0.011 | 8.0 | 1800 | 0.7673 | 0.81 |
| 0.0077 | 9.0 | 2025 | 0.7159 | 0.83 |
| 0.0068 | 10.0 | 2250 | 0.7428 | 0.83 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
crewdon/AICategoryMapping-multilingual-e5-small
|
crewdon
| 2023-09-02T15:20:57Z | 14 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-09-02T15:05:10Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# AICategoryMapping-multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('crewdon/AICategoryMapping-multilingual-e5-small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('crewdon/AICategoryMapping-multilingual-e5-small')
model = AutoModel.from_pretrained('crewdon/AICategoryMapping-multilingual-e5-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=crewdon/AICategoryMapping-multilingual-e5-small)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 94 with parameters:
```
{'batch_size': 400}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 40,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 376,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
btamm12/bert-base-uncased-finetuned-wls-manual-6ep-lower
|
btamm12
| 2023-09-02T15:20:25Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T15:18:28Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-wls-manual-6ep-lower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wls-manual-6ep-lower
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3314
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1123 | 0.93 | 7 | 1.9531 |
| 1.6034 | 2.0 | 15 | 1.5832 |
| 1.489 | 2.93 | 22 | 1.4553 |
| 1.3975 | 4.0 | 30 | 1.4448 |
| 1.3074 | 4.93 | 37 | 1.2918 |
| 1.3083 | 5.6 | 42 | 1.4088 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
btamm12/bert-base-cased-finetuned-wls-manual-6ep
|
btamm12
| 2023-09-02T15:18:21Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T15:16:23Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-finetuned-wls-manual-6ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-wls-manual-6ep
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1598 | 0.93 | 7 | 1.8481 |
| 1.6257 | 2.0 | 15 | 1.6306 |
| 1.5537 | 2.93 | 22 | 1.5150 |
| 1.3943 | 4.0 | 30 | 1.4392 |
| 1.355 | 4.93 | 37 | 1.4389 |
| 1.3098 | 5.6 | 42 | 1.3518 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
btamm12/roberta-base-finetuned-wls-manual-5ep
|
btamm12
| 2023-09-02T15:16:16Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T15:14:07Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetuned-wls-manual-5ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-wls-manual-5ep
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8234 | 0.93 | 7 | 1.5153 |
| 1.4411 | 2.0 | 15 | 1.3464 |
| 1.2972 | 2.93 | 22 | 1.3354 |
| 1.2674 | 4.0 | 30 | 1.2134 |
| 1.2753 | 4.67 | 35 | 1.3446 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Campqt/rl_course_vizdoom_health_gathering_supreme
|
Campqt
| 2023-09-02T15:14:58Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T15:14:52Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.72 +/- 4.36
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Campqt/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
The-matt/autumn-shadow-48_70
|
The-matt
| 2023-09-02T15:13:29Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T15:13:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
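For reference, the same quantization settings can be expressed with `transformers.BitsAndBytesConfig` when loading the base model for PEFT training. This is a sketch only: the card does not name the base model, so the model id below is a placeholder.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# "base-model-id" is a placeholder; the card does not say which model this adapter targets.
base_model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
```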
### Framework versions
- PEFT 0.6.0.dev0
|
btamm12/bert-base-cased-finetuned-wls-manual-5ep
|
btamm12
| 2023-09-02T15:11:56Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T15:10:02Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-finetuned-wls-manual-5ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-wls-manual-5ep
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1603 | 0.93 | 7 | 1.8523 |
| 1.6398 | 2.0 | 15 | 1.6332 |
| 1.5675 | 2.93 | 22 | 1.5257 |
| 1.4167 | 4.0 | 30 | 1.4623 |
| 1.3885 | 4.67 | 35 | 1.4795 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
btamm12/bert-base-uncased-finetuned-wls-manual-4ep-lower
|
btamm12
| 2023-09-02T15:07:01Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T15:04:34Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-wls-manual-4ep-lower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wls-manual-4ep-lower
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1174 | 0.93 | 7 | 1.9683 |
| 1.617 | 2.0 | 15 | 1.6046 |
| 1.5138 | 2.93 | 22 | 1.4859 |
| 1.4474 | 3.73 | 28 | 1.4356 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
The-matt/autumn-shadow-48_60
|
The-matt
| 2023-09-02T15:06:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T15:06:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
NiscR/a2c-PandaReachDense-v3
|
NiscR
| 2023-09-02T15:06:45Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T15:01:15Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
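Until the section above is filled in, here is a minimal loading sketch. The checkpoint filename is an assumption (it is not documented in the card), so check the repository's file list.
```python
import gymnasium as gym
import panda_gym  # noqa: F401  (importing registers the PandaReachDense-v3 environment)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# The filename below is a guess; use the actual .zip checkpoint from this repo.
checkpoint = load_from_hub(repo_id="NiscR/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)

eval_env = gym.make("PandaReachDense-v3")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```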
|
DrishtiSharma/mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.001
|
DrishtiSharma
| 2023-09-02T15:04:08Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-09-02T12:48:56Z |
---
license: mit
base_model: facebook/mbart-large-50
tags:
- translation
- generated_from_trainer
metrics:
- bleu
- rouge
model-index:
- name: mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.001
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9549
- Bleu: 45.0307
- Rouge: {'rouge1': 0.7049318825090395, 'rouge2': 0.5238048751750992, 'rougeL': 0.684187379601513, 'rougeLsum': 0.6843574853855577}
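A minimal EN→ES inference sketch, assuming the fine-tuned model keeps the standard mBART-50 language codes (`en_XX`, `es_XX`):
```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "DrishtiSharma/mbart-large-50-en-es-translation-lr-1e-05-weight-decay-0.001"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX", tgt_lang="es_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"],  # force Spanish as the target language
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```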
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:----------------------------------------------------------------------------------------------------------------------------:|
| 1.4627 | 1.0 | 4500 | 1.0255 | 42.1880 | {'rouge1': 0.6725633216905762, 'rouge2': 0.48605402524493657, 'rougeL': 0.6498853764470456, 'rougeLsum': 0.6501981166312041} |
| 0.8878 | 2.0 | 9000 | 0.9572 | 44.1734 | {'rouge1': 0.6912686406245903, 'rouge2': 0.5093695171345348, 'rougeL': 0.6701896043455414, 'rougeLsum': 0.6703473419504804} |
| 0.7125 | 3.0 | 13500 | 0.9414 | 44.8709 | {'rouge1': 0.7051197958532004, 'rouge2': 0.5210482863677958, 'rougeL': 0.6843075431636916, 'rougeLsum': 0.6846265298079588} |
| 0.6092 | 4.0 | 18000 | 0.9549 | 45.0821 | {'rouge1': 0.7047932899349161, 'rouge2': 0.523739339466653, 'rougeL': 0.6840127607742443, 'rougeLsum': 0.684202100852132} |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4.dev0
- Tokenizers 0.13.3
|
btamm12/roberta-base-finetuned-wls-manual-3ep
|
btamm12
| 2023-09-02T15:01:54Z | 129 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T14:59:09Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetuned-wls-manual-3ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-wls-manual-3ep
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3361
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8156 | 0.93 | 7 | 1.5116 |
| 1.4371 | 2.0 | 15 | 1.3472 |
| 1.3218 | 2.8 | 21 | 1.3278 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dhinman/poca-SoccerTwos
|
dhinman
| 2023-09-02T15:00:49Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-09-02T14:59:42Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: dhinman/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
btamm12/bert-base-uncased-finetuned-wls-manual-3ep-lower
|
btamm12
| 2023-09-02T14:59:01Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T14:56:34Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-wls-manual-3ep-lower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wls-manual-3ep-lower
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1229 | 0.93 | 7 | 1.9851 |
| 1.635 | 2.0 | 15 | 1.6390 |
| 1.5515 | 2.8 | 21 | 1.5881 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
tsukemono/japanese-stablelm-base-alpha-7b-qlora-marisa
|
tsukemono
| 2023-09-02T14:58:35Z | 0 | 0 | null |
[
"ja",
"region:us"
] | null | 2023-08-28T08:24:30Z |
---
language:
- ja
---
## Model overview
A model you can chat with as Marisa Kirisame (霧雨魔理沙).
It consists of LoRA weights for [Japanese-StableLM-Base-Alpha-7B](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b).
## Usage
An example of how to run inference is provided in how_to_use.ipynb; please use it as a reference.
Giving the model a prompt such as "ユーザー: hogehoge\n魔理沙: " ("User: ...\nMarisa: ") lets you chat with Marisa.
## Notes
This is a fan work based on the Touhou Project.
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
btamm12/bert-base-cased-finetuned-wls-manual-3ep
|
btamm12
| 2023-09-02T14:56:26Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T14:54:00Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-finetuned-wls-manual-3ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-wls-manual-3ep
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1602 | 0.93 | 7 | 1.8592 |
| 1.6456 | 2.0 | 15 | 1.6724 |
| 1.6082 | 2.8 | 21 | 1.4744 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
btamm12/roberta-base-finetuned-wls-manual-2ep
|
btamm12
| 2023-09-02T14:53:53Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T14:51:11Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetuned-wls-manual-2ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-wls-manual-2ep
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8161 | 0.93 | 7 | 1.5123 |
| 1.4497 | 1.87 | 14 | 1.3929 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
btamm12/bert-base-uncased-finetuned-wls-manual-2ep-lower
|
btamm12
| 2023-09-02T14:51:03Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T14:48:39Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-wls-manual-2ep-lower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wls-manual-2ep-lower
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1678 | 0.93 | 7 | 2.0527 |
| 1.6854 | 1.87 | 14 | 1.7688 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Therence-NG/Decoder-1b
|
Therence-NG
| 2023-09-02T14:49:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T14:49:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
The-matt/autumn-shadow-48_30
|
The-matt
| 2023-09-02T14:45:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T14:45:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
btamm12/bert-base-cased-finetuned-wls-manual-1ep
|
btamm12
| 2023-09-02T14:42:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-02T14:40:23Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-finetuned-wls-manual-1ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-wls-manual-1ep
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1332 | 0.93 | 7 | 1.9236 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
The-matt/autumn-shadow-48_20
|
The-matt
| 2023-09-02T14:38:29Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T14:38:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
Lenouche/JoueurDuGrenier
|
Lenouche
| 2023-09-02T14:31:09Z | 0 | 0 | null |
[
"fr",
"license:openrail",
"region:us"
] | null | 2023-08-13T23:02:23Z |
---
license: openrail
language:
- fr
---
|
The-matt/autumn-shadow-48_10
|
The-matt
| 2023-09-02T14:30:51Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T14:30:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
Lenouche/Sblerky
|
Lenouche
| 2023-09-02T14:30:42Z | 0 | 0 | null |
[
"fr",
"license:openrail",
"region:us"
] | null | 2023-08-13T23:01:35Z |
---
license: openrail
language:
- fr
---
|
Lenouche/Conkerax
|
Lenouche
| 2023-09-02T14:30:03Z | 0 | 0 | null |
[
"fr",
"license:openrail",
"region:us"
] | null | 2023-08-13T22:13:05Z |
---
license: openrail
language:
- fr
---
|
Lenouche/GiaTechAndGaming
|
Lenouche
| 2023-09-02T14:28:46Z | 0 | 0 | null |
[
"fr",
"license:openrail",
"region:us"
] | null | 2023-08-17T01:44:54Z |
---
language:
- fr
license: openrail
---
|
Lenouche/DefendIntelligence
|
Lenouche
| 2023-09-02T14:26:44Z | 0 | 0 | null |
[
"fr",
"license:openrail",
"region:us"
] | null | 2023-08-31T00:44:45Z |
---
language:
- fr
license: openrail
---
|
SymeCloud/Llama2-7b-Chat-GGUF
|
SymeCloud
| 2023-09-02T14:25:41Z | 1 | 2 |
transformers
|
[
"transformers",
"llama",
"code",
"llama-2",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-09-02T11:59:57Z |
---
license: apache-2.0
language:
- en
tags:
- code
- llama-2
---
# Llama2 Chat 7B - GGUF
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Llama 2 7b Chat GGML](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML)
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including, for the first time, full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
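As an illustrative sketch, a GGUF file from this repo can be run with the `llama-cpp-python` bindings. The filename below is a placeholder; check the repository's file list for the actual `.gguf` name.
```python
from llama_cpp import Llama

# "llama-2-7b-chat.Q4_K_M.gguf" is a placeholder filename, not necessarily the file in this repo.
llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)
output = llm("Q: What is GGUF? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```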
|
Kamer/DuplicatesUnique
|
Kamer
| 2023-09-02T14:24:10Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-02T13:36:09Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: DuplicatesUnique
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DuplicatesUnique
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.7513
- eval_Accuracy: 0.3885
- eval_F1_macro: 0.1389
- eval_F1_class_0: 0.8712
- eval_F1_class_1: 0.6667
- eval_F1_class_2: 0.2133
- eval_F1_class_3: 0.0
- eval_F1_class_4: 0.0
- eval_F1_class_5: 0.0
- eval_F1_class_6: 0.0187
- eval_F1_class_7: 0.0
- eval_F1_class_8: 0.0
- eval_F1_class_9: 0.8726
- eval_F1_class_10: 0.0147
- eval_F1_class_11: 0.0
- eval_F1_class_12: 0.1204
- eval_F1_class_13: 0.0
- eval_F1_class_14: 0.0
- eval_F1_class_15: 0.0
- eval_F1_class_16: 0.0
- eval_F1_class_17: 0.0
- eval_F1_class_18: 0.0
- eval_F1_class_19: 0.0
- eval_runtime: 16.4781
- eval_samples_per_second: 68.576
- eval_steps_per_second: 8.618
- epoch: 0.77
- step: 5000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Avenuenw/prompt-tokenizer
|
Avenuenw
| 2023-09-02T14:03:10Z | 0 | 0 | null |
[
"en",
"dataset:daspartho/stable-diffusion-prompts",
"license:apache-2.0",
"region:us"
] | null | 2023-09-02T14:02:05Z |
---
language: en
license: apache-2.0
datasets: daspartho/stable-diffusion-prompts
---
# Prompt Tokenizer
A GPT-2 tokenizer trained on a [dataset](https://huggingface.co/datasets/daspartho/stable-diffusion-prompts) of Stable Diffusion prompts.
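A minimal usage sketch, assuming the tokenizer files load through `AutoTokenizer`:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Avenuenw/prompt-tokenizer")
tokens = tokenizer.tokenize("a portrait of an astronaut, highly detailed, artstation")
print(tokens)
print(tokenizer.convert_tokens_to_ids(tokens))
```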
|
trieudemo11/llama_7b_attrb_cate_8m_2
|
trieudemo11
| 2023-09-02T13:58:45Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T13:58:29Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
Avenuenw/prompt-extender
|
Avenuenw
| 2023-09-02T13:58:26Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-02T13:52:41Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: prompt-extend
results: []
---
[Open the demo Space](https://huggingface.co/spaces/daspartho/prompt-extend)
# Prompt Extend
Text generation model for generating suitable style cues given the main idea for a prompt.
It is a GPT-2 model trained on a [dataset](https://huggingface.co/datasets/daspartho/stable-diffusion-prompts) of Stable Diffusion prompts.
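A minimal usage sketch with the `text-generation` pipeline (the prompt is only an example):
```python
from transformers import pipeline

extender = pipeline("text-generation", model="Avenuenw/prompt-extender")
print(extender("a portrait of an astronaut", max_new_tokens=30)[0]["generated_text"])
```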
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.7436 | 1.0 | 12796 | 2.5429 |
| 2.3292 | 2.0 | 25592 | 2.0711 |
| 1.9439 | 3.0 | 38388 | 1.8447 |
| 1.7059 | 4.0 | 51184 | 1.7325 |
| 1.5775 | 5.0 | 63980 | 1.7110 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
VinayHajare/ppo-LunarLander-v2
|
VinayHajare
| 2023-09-02T13:51:21Z | 5 | 3 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T06:37:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.26 +/- 19.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
# !pip install gymnasium huggingface-sb3 stable_baselines3[extra]
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor
repo_id = "VinayHajare/ppo-LunarLander-v2"
filename = "ppo-LunarLander-v2.zip"
eval_env = gym.make("LunarLander-v2", render_mode="human")
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint,print_system_info=True)
mean_reward, std_reward = evaluate_policy(model,eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Enjoy trained agent
observation, info = eval_env.reset()
for _ in range(1000):
    action, _states = model.predict(observation, deterministic=True)
    observation, rewards, terminated, truncated, info = eval_env.step(action)
    eval_env.render()
    if terminated or truncated:
        observation, info = eval_env.reset()
```
|
pritam3355/llama2-qlora-finetunined-french
|
pritam3355
| 2023-09-02T13:34:55Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T13:30:27Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
ckandemir/xlm-roberta-base-finetuned-panx-de
|
ckandemir
| 2023-09-02T13:28:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-02T08:51:32Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6993243243243242
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3902
- F1: 0.6993
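A minimal usage sketch with the `token-classification` pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ckandemir/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("George Washington lived near the Potomac River in Virginia."))
```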
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1085 | 1.0 | 50 | 0.5687 | 0.5579 |
| 0.5001 | 2.0 | 100 | 0.4186 | 0.6781 |
| 0.3535 | 3.0 | 150 | 0.3902 | 0.6993 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jongalon/intel_image_classification_fastai
|
jongalon
| 2023-09-02T13:17:37Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-09-02T13:17:34Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
SaadoN/bert-finetuned-squad
|
SaadoN
| 2023-09-02T13:14:39Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-02T10:57:32Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
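A minimal usage sketch with the `question-answering` pipeline (question and context are illustrative only):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="SaadoN/bert-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result)  # {"score": ..., "start": ..., "end": ..., "answer": ...}
```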
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
HorcruxNo13/vit-base-patch16-224-in21k-finetuned-eurosat
|
HorcruxNo13
| 2023-09-02T13:10:51Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-02T13:01:40Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7333333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5802
- Accuracy: 0.7333
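A minimal inference sketch with the `image-classification` pipeline (the image path is a placeholder; a URL or `PIL.Image` also works):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="HorcruxNo13/vit-base-patch16-224-in21k-finetuned-eurosat")
classifier("path/to/image.jpg")  # returns the top class labels with scores
```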
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 1.0922 | 0.7333 |
| 2.0408 | 2.0 | 16 | 0.6039 | 0.7333 |
| 0.9248 | 3.0 | 24 | 0.5810 | 0.7333 |
| 0.6035 | 4.0 | 32 | 0.5830 | 0.7333 |
| 0.5951 | 5.0 | 40 | 0.5802 | 0.7333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Ahmedhisham/social_bias_Bert
|
Ahmedhisham
| 2023-09-02T13:10:27Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-02T12:32:03Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: social_bias_Bert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# social_bias_Bert
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
LiChenYi/QA
|
LiChenYi
| 2023-09-02T13:05:16Z | 0 | 0 | null |
[
"license:unknown",
"region:us"
] | null | 2023-09-02T12:55:15Z |
---
license: unknown
---
A record of problems encountered while using AI, kept so that those who come later can avoid the same pitfalls.
# 2. Issues encountered while using Colab
1. Pulling data from a Hugging Face repository in Colab fails with the following error:
Connecting to [huggingface.co](http://huggingface.co/) ([huggingface.co](http://huggingface.co/))|18.239.50.16|:443... connected.
HTTP request sent, awaiting response... 401 Unauthorized
Solution:
In the Hugging Face settings, find the user access requests option ("User Access requests") and set it to disabled.
|
ckandemir/xlm-roberta-base-finetuned-panx-de-fr
|
ckandemir
| 2023-09-02T13:04:30Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-02T12:13:02Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1669
- F1: 0.8604
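A minimal inference sketch with the `token-classification` pipeline (the sample sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="ckandemir/xlm-roberta-base-finetuned-panx-de-fr",
               aggregation_strategy="simple")
ner("Angela Merkel a rencontré Emmanuel Macron à Berlin.")
```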
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3059 | 1.0 | 715 | 0.1894 | 0.8169 |
| 0.148 | 2.0 | 1430 | 0.1663 | 0.8473 |
| 0.0932 | 3.0 | 2145 | 0.1669 | 0.8604 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
penguinman73/xlm-roberta-base-finetuned-panx-all
|
penguinman73
| 2023-09-02T13:01:15Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-02T12:45:50Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1739
- F1: 0.8549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3005 | 1.0 | 835 | 0.1894 | 0.8174 |
| 0.1568 | 2.0 | 1670 | 0.1743 | 0.8382 |
| 0.1027 | 3.0 | 2505 | 0.1739 | 0.8549 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
quantumaikr/KoreanLM-3B
|
quantumaikr
| 2023-09-02T12:55:53Z | 109 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"korean",
"foundation",
"ko",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-21T09:02:18Z |
---
language:
- ko
- en
pipeline_tag: text-generation
tags:
- llama
- korean
- foundation
---
<p align="center" width="100%">
<img src="https://i.imgur.com/snFDU0P.png" alt="KoreanLM icon" style="width: 500px; display: block; margin: auto; border-radius: 10%;">
</p>
# KoreanLM: Korean Language Model Project
KoreanLM is an open-source project for developing Korean language models. Most current language models focus on English, so training on Korean is comparatively sparse and tokenization of Korean text is often inefficient. The KoreanLM project was started to address these problems and provide a language model optimized for Korean.
## Project goals
1. Develop a language model specialized for Korean: build a model that understands and generates Korean more accurately by reflecting the grammar, vocabulary, and cultural characteristics of the language.
2. Introduce an efficient tokenization scheme: improve model performance with a new tokenization method that analyzes Korean text efficiently and accurately.
3. Improve the usability of large language models: today's very large models are difficult for companies to fine-tune on their own data. We adjust the size of the Korean language model to improve usability and make it easier to apply to natural language processing tasks.
## Usage
The following example loads the model and tokenizer through the transformers library.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained("quantumaikr/KoreanLM-3B")
tokenizer = transformers.AutoTokenizer.from_pretrained("quantumaikr/KoreanLM-3B")
```
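Continuing from the snippet above, a minimal generation sketch (the prompt and sampling settings are illustrative, not from the original card):
```python
# Reuses `model` and `tokenizer` from the loading example above.
inputs = tokenizer("한국어로 자기소개를 해주세요.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```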
## Technical inquiries
hi@quantumai.kr
www.quantumai.kr
|
astroid19/ppo-LunarLander-v2
|
astroid19
| 2023-09-02T12:46:19Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T12:45:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.82 +/- 21.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it locally.
checkpoint = load_from_hub(repo_id="astroid19/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
HorcruxNo13/swinv2-small-patch4-window8-256-finetuned-eurosat
|
HorcruxNo13
| 2023-09-02T12:44:00Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-small-patch4-window8-256",
"base_model:finetune:microsoft/swinv2-small-patch4-window8-256",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-02T12:25:25Z |
---
license: apache-2.0
base_model: microsoft/swinv2-small-patch4-window8-256
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-small-patch4-window8-256-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7333333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-small-patch4-window8-256-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swinv2-small-patch4-window8-256](https://huggingface.co/microsoft/swinv2-small-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5868
- Accuracy: 0.7333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 1.1951 | 0.2667 |
| 5.0901 | 2.0 | 16 | 1.4301 | 0.7333 |
| 2.785 | 3.0 | 24 | 1.1514 | 0.2667 |
| 0.8599 | 4.0 | 32 | 0.5810 | 0.7333 |
| 0.6058 | 5.0 | 40 | 0.5868 | 0.7333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
simlamkr1/llama2-simtestmodel1
|
simlamkr1
| 2023-09-02T12:32:06Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-09-01T13:56:00Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a matching load-time sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
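A minimal load-time sketch that mirrors the config above, assuming a Llama-2 7B base model (the base model id is an assumption; only the adapter repo is from this card):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Assumption: Llama-2 7B as the base; swap in the actual base checkpoint.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf",
                                            quantization_config=bnb_config,
                                            device_map="auto")
model = PeftModel.from_pretrained(base, "simlamkr1/llama2-simtestmodel1")
```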
### Framework versions
- PEFT 0.6.0.dev0
|
rrozb/dqn-SpaceInvadersNoFrameskip-v4
|
rrozb
| 2023-09-02T12:22:17Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T12:21:54Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 597.00 +/- 109.80
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rrozb -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rrozb -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rrozb
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
NiscR/Reinforce-Pixel1
|
NiscR
| 2023-09-02T12:19:12Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T11:35:10Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixel1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 31.20 +/- 23.29
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
penguinman73/xlm-roberta-base-finetuned-panx-fr
|
penguinman73
| 2023-09-02T12:18:32Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-02T12:13:41Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2760
- F1: 0.8452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5839 | 1.0 | 191 | 0.3623 | 0.7527 |
| 0.2607 | 2.0 | 382 | 0.2836 | 0.8238 |
| 0.1745 | 3.0 | 573 | 0.2760 | 0.8452 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
penguinman73/xlm-roberta-base-finetuned-panx-de-fr
|
penguinman73
| 2023-09-02T12:12:18Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-02T11:58:38Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1623
- F1: 0.8603
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1813 | 0.8232 |
| 0.1482 | 2.0 | 1430 | 0.1586 | 0.8462 |
| 0.0959 | 3.0 | 2145 | 0.1623 | 0.8603 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
penguinman73/xlm-roberta-base-finetuned-panx-de
|
penguinman73
| 2023-09-02T11:56:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-27T01:35:12Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2992
- F1: 0.8285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6098 | 1.0 | 167 | 0.3570 | 0.7592 |
| 0.2633 | 2.0 | 334 | 0.2995 | 0.8171 |
| 0.1792 | 3.0 | 501 | 0.2992 | 0.8285 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
amgodbole/bloom_prompt_tuning_1693653323.8270018
|
amgodbole
| 2023-09-02T11:36:37Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T11:36:36Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
casque/FilmVelvia3
|
casque
| 2023-09-02T11:34:13Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-02T11:32:49Z |
---
license: creativeml-openrail-m
---
|
Mustain/line_fujiki3
|
Mustain
| 2023-09-02T11:20:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T11:20:04Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
aikotainaru/Dottore_voice
|
aikotainaru
| 2023-09-02T11:19:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-30T14:02:27Z |
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/IZqQvtZ02gfU6mluiNaXo.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/LDKjkNfr1-TiTY-x-bBnr.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/e3cSWdQkvaorenFMzjmum.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/nA8FlNF6XL-HQZRpqej6o.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/X10FTsq6QHe7nvpNxXStR.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/J_wYluqdk8TlUmIMAK47G.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/eHKM_nJcZ3KfPUDI75SPV.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/wZyvTpMVzjEbfbLwERxPJ.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/RThzUoi1UDFphHZCftU-k.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/zltQYBg6a789iTPWnp7kA.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/tviqk-lhL6a8SXsmQSaxD.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/jOe34I-NhkWwT7Ujf1Njf.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/cFKVq90mezpXbBfJQ_3jf.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/Lt82Q_2OzH1vfE8E-vVsH.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/fJvK7WKHukBXjoaej7XTy.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/21ifaJ6VjYOG-q65PzN-D.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/Sa17c63JUUs050bVGdNUs.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/Xz3MmjlegoqxPTckGwa2T.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/MIAYZSROm9NI_2uSi-Bce.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/CDYJjADDXu0yO2YUhC0Xi.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/WFqkjbogef_A0aX0-vWhu.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/RGIx--NRqDWwPCymxa3fa.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/_VPgY4aquWPo8z8qonwFH.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/VINsRv15uF2uGL6A86yAY.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/Iy7wl52pgyl7ZwWzfgQ1m.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/GPC6lKaEu5qcmBH4-elBj.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/v6Gwr1pwaPd8o7_s2KJsd.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/lHs1JpNYreHnKbMVJJCbe.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/jOfi8zmp18SEej2NePF1s.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/q9i9fWl49Q9tkVTnJShQN.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/R_lVYAgVGO2U6M20JcelE.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/RQkwoh4uFPLlr1FQSAlFA.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/BkoXLAc3Ya7mGLcGq41x-.mpga"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64ef4b7ccf9bec024a5e46f6/omynrc_owbbYJz1308E30.mpga"></audio>
|
goat923/my_awesome_wnut_model
|
goat923
| 2023-09-02T11:16:36Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-02T10:33:21Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: goat923/my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# goat923/my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1165
- Validation Loss: 0.2494
- Train Precision: 0.6287
- Train Recall: 0.4557
- Train F1: 0.5284
- Train Accuracy: 0.9482
- Epoch: 2
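A minimal TensorFlow inference sketch (the sample sentence is illustrative):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

tok = AutoTokenizer.from_pretrained("goat923/my_awesome_wnut_model")
model = TFAutoModelForTokenClassification.from_pretrained("goat923/my_awesome_wnut_model")

inputs = tok("HuggingFace is based in New York City", return_tensors="tf")
logits = model(**inputs).logits
pred_ids = tf.math.argmax(logits, axis=-1)[0]
print([model.config.id2label[int(i)] for i in pred_ids])
```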
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3527 | 0.3075 | 0.3319 | 0.0945 | 0.1471 | 0.9281 | 0 |
| 0.1583 | 0.2594 | 0.5886 | 0.4211 | 0.4909 | 0.9455 | 1 |
| 0.1165 | 0.2494 | 0.6287 | 0.4557 | 0.5284 | 0.9482 | 2 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dwitidibyajyoti/fine_tune_layoutmlv3_model
|
dwitidibyajyoti
| 2023-09-02T11:15:36Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-30T09:45:10Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- Precision: 0.5109
- Recall: 0.6026
- F1: 0.5529
- Accuracy: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 8.33 | 100 | 0.6800 | 0.3371 | 0.3846 | 0.3593 | 0.7682 |
| No log | 16.67 | 200 | 0.3088 | 0.5204 | 0.6538 | 0.5795 | 0.9156 |
| No log | 25.0 | 300 | 0.2142 | 0.5326 | 0.6282 | 0.5765 | 0.9305 |
| No log | 33.33 | 400 | 0.2301 | 0.5795 | 0.6538 | 0.6145 | 0.9288 |
| 0.4115 | 41.67 | 500 | 0.2426 | 0.5618 | 0.6410 | 0.5988 | 0.9272 |
| 0.4115 | 50.0 | 600 | 0.4171 | 0.6190 | 0.6667 | 0.6420 | 0.8924 |
| 0.4115 | 58.33 | 700 | 0.2265 | 0.5393 | 0.6154 | 0.5749 | 0.9371 |
| 0.4115 | 66.67 | 800 | 0.2869 | 0.5506 | 0.6282 | 0.5868 | 0.9156 |
| 0.4115 | 75.0 | 900 | 0.2633 | 0.5568 | 0.6282 | 0.5904 | 0.9272 |
| 0.0231 | 83.33 | 1000 | 0.2763 | 0.5109 | 0.6026 | 0.5529 | 0.9222 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
yaohuacn/a2c-PandaReachDense-v3
|
yaohuacn
| 2023-09-02T11:10:11Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T11:05:12Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.08
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it locally.
checkpoint = load_from_hub(repo_id="yaohuacn/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
aigrils2/primitive0-diffuser
|
aigrils2
| 2023-09-02T11:05:44Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:wangjun/majicmix-realistic-v6",
"base_model:adapter:wangjun/majicmix-realistic-v6",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-02T10:20:37Z |
---
base_model: wangjun/majicmix-realistic-v6
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
pipeline_tag: text-to-image
---
|
madroid/onnx-whisper
|
madroid
| 2023-09-02T11:02:02Z | 0 | 0 | null |
[
"onnx",
"whisper",
"openai",
"license:apache-2.0",
"region:us"
] | null | 2023-09-02T07:14:04Z |
---
license: apache-2.0
tags:
- whisper
- onnx
- openai
---
|
JanSt/gbert-base-finetuned-twitter
|
JanSt
| 2023-09-02T10:57:40Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:deepset/gbert-base",
"base_model:finetune:deepset/gbert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-24T10:58:07Z |
---
license: mit
base_model: deepset/gbert-base
tags:
- generated_from_trainer
model-index:
- name: gbert-base-finetuned-twitter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gbert-base-finetuned-twitter
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7380
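A minimal `fill-mask` sketch (the German sample sentence is illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="JanSt/gbert-base-finetuned-twitter")
fill("Heute ist ein [MASK] Tag.")  # gbert uses the standard BERT [MASK] token
```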
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 192
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.194 | 1.0 | 4180 | 1.9622 |
| 2.0075 | 2.0 | 8360 | 1.8813 |
| 1.9429 | 3.0 | 12540 | 1.8339 |
| 1.8985 | 4.0 | 16720 | 1.8057 |
| 1.8676 | 5.0 | 20900 | 1.7801 |
| 1.8446 | 6.0 | 25080 | 1.7793 |
| 1.829 | 7.0 | 29260 | 1.7580 |
| 1.815 | 8.0 | 33440 | 1.7445 |
| 1.8048 | 9.0 | 37620 | 1.7319 |
| 1.7997 | 10.0 | 41800 | 1.7331 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
andrewcho92/helloworld
|
andrewcho92
| 2023-09-02T10:33:10Z | 0 | 0 | null |
[
"text-generation",
"en",
"license:openrail",
"region:us"
] |
text-generation
| 2023-09-02T10:14:37Z |
---
license: openrail
language:
- en
pipeline_tag: text-generation
---
|
adimazuz/q-FrozenLake-v1-4x4-noSlippery
|
adimazuz
| 2023-09-02T10:23:17Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T10:23:15Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
# load_from_hub here is the helper defined in the Deep RL Course notebook
# (it unpickles the saved Q-table dict); it is not the huggingface_sb3 function.
model = load_from_hub(repo_id="adimazuz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jigglesaw/finetuning-sentiment-model-3000-samples
|
jigglesaw
| 2023-09-02T10:16:22Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-02T08:56:24Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.870967741935484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3394
- Accuracy: 0.8667
- F1: 0.8710
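A minimal inference sketch (the review text is illustrative; labels are `LABEL_0`/`LABEL_1` unless `id2label` was customized):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="jigglesaw/finetuning-sentiment-model-3000-samples")
clf("This movie was surprisingly good!")
```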
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
StefanoCaloni/dqn-SpaceInvaders
|
StefanoCaloni
| 2023-09-02T10:04:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T08:32:06Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 299.00 +/- 68.26
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga StefanoCaloni -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga StefanoCaloni -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga StefanoCaloni
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 10000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 100),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
andrei-saceleanu/detr-resnet-50_finetuned_cppe5
|
andrei-saceleanu
| 2023-09-02T10:00:41Z | 187 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-09-02T09:07:57Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
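A minimal inference sketch with the `object-detection` pipeline (the image path is a placeholder; a URL or `PIL.Image` also works):
```python
from transformers import pipeline

detector = pipeline("object-detection", model="andrei-saceleanu/detr-resnet-50_finetuned_cppe5")
detector("path/to/image.jpg")  # returns boxes, scores and CPPE-5 labels
```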
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
utnah/ckpt
|
utnah
| 2023-09-02T09:33:43Z | 0 | 2 | null |
[
"license:openrail",
"region:us"
] | null | 2022-10-31T12:34:09Z |
---
license: openrail
---
Stable Diffusion weight checkpoints in ckpt format.
For quick loading in [Google Colab](https://colab.research.google.com/drive/1TC4SSLncPWytSPvquR6Y4-U7wZRfAXrV)
[](https://colab.research.google.com/drive/1TC4SSLncPWytSPvquR6Y4-U7wZRfAXrV)
|
fathercc/majiczhenshi
|
fathercc
| 2023-09-02T09:16:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-02T12:23:04Z |
---
license: creativeml-openrail-m
---
|
Yntec/DreamLikeRemix
|
Yntec
| 2023-09-02T08:58:22Z | 420 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"anime",
"Dreamlike",
"art",
"Retro",
"Elldreths",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-11T14:26:00Z |
---
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- anime
- Dreamlike
- art
- Retro
- Elldreths
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: false
---
# DreamLikeRemix
Samples and prompts:


beautiful background, beautiful detailed girl, Cartoon Pretty CUTE Girl, sitting on a box of cherries, DETAILED CHIBI EYES, holding antique slot machine, detailed hair, Ponytail, key shot at computer monitor, Magazine ad, iconic, 1940, sharp focus. Acrylic art on canvas By KlaysMoji and artgerm and Clay Mann and and leyendecker
A mix of Dreamlike Diffusion and a little bit of Elldreths Retro Mix.
Full recipe:
# Add Difference 1.0
Primary model:
Dreamlike Diffusion
Secondary model:
Elldreths Retro Mix
Tertiary model:
v1-5-pruned-fp16-no-ema
Output Model:
Temporary
# Weighted Sum 0.85
Primary model:
Temporary
Secondary model:
Dreamlike Diffusion
Output Model:
dreamLikeRemix
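For reference, a minimal sketch of the final Weighted Sum step over two checkpoints' state dicts (filenames are hypothetical; the earlier Add Difference step is analogous: `temp[k] = primary[k] + 1.0 * (secondary[k] - tertiary[k])`):
```python
import torch

a = torch.load("temporary.ckpt", map_location="cpu")["state_dict"]                # primary
b = torch.load("dreamlike-diffusion-1.0.ckpt", map_location="cpu")["state_dict"]  # secondary
alpha = 0.85  # Weighted Sum 0.85

merged = {k: alpha * a[k] + (1 - alpha) * b[k] for k in a.keys() & b.keys()}
torch.save({"state_dict": merged}, "dreamLikeRemix.ckpt")
```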
Original pages:
https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0
https://civitai.com/models/1474/elldreths-retro-mix
|
SunshineYellow/t5-small-finetuned-xsum
|
SunshineYellow
| 2023-09-02T08:37:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:scitldr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-20T06:06:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- scitldr
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scitldr
type: scitldr
config: Abstract
split: validation
args: Abstract
metrics:
- name: Rouge1
type: rouge
value: 24.7942
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the scitldr dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8686
- Rouge1: 24.7942
- Rouge2: 7.8227
- Rougel: 21.2018
- Rougelsum: 21.2779
- Gen Len: 18.4297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 125 | 4.1327 | 23.5028 | 7.9229 | 19.2335 | 19.2839 | 18.5024 |
| No log | 2.0 | 250 | 4.0197 | 23.4862 | 7.3941 | 19.1734 | 19.2273 | 18.4475 |
| No log | 3.0 | 375 | 3.9659 | 24.0596 | 7.6225 | 20.2649 | 20.3197 | 18.2375 |
| 4.2188 | 4.0 | 500 | 3.9302 | 24.323 | 7.9627 | 20.7527 | 20.8616 | 18.1826 |
| 4.2188 | 5.0 | 625 | 3.9060 | 24.7138 | 7.9075 | 21.1786 | 21.2552 | 18.1939 |
| 4.2188 | 6.0 | 750 | 3.8900 | 24.696 | 7.7986 | 21.161 | 21.2083 | 18.2342 |
| 4.2188 | 7.0 | 875 | 3.8801 | 24.8363 | 7.852 | 21.2452 | 21.3039 | 18.3473 |
| 3.991 | 8.0 | 1000 | 3.8736 | 24.8537 | 7.9099 | 21.2259 | 21.3141 | 18.3845 |
| 3.991 | 9.0 | 1125 | 3.8700 | 24.7938 | 7.8088 | 21.1743 | 21.2603 | 18.4233 |
| 3.991 | 10.0 | 1250 | 3.8686 | 24.7942 | 7.8227 | 21.2018 | 21.2779 | 18.4297 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.12.1
|
922-Narra/llama-2-7b-chat-tagalog-v0.3-gguf
|
922-Narra
| 2023-09-02T08:25:31Z | 19 | 1 | null |
[
"gguf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2023-09-01T09:44:48Z |
---
license: llama2
---
GGUFs of [l27b-chat-tagalog-v0.3](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.3). (Primarily tested and run with Koboldcpp v1.41+).
QLora (hf and GGML) [here](https://huggingface.co/922-Narra/tagalog-lm-lora-tests/tree/main/llama-2-7b-chat-tagalog-0.3).
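A minimal local-inference sketch with `llama-cpp-python` (the quantization filename is hypothetical; pick one of the `.gguf` files from this repo):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="llama-2-7b-chat-tagalog-v0.3.q4_K_M.gguf", n_ctx=2048)
out = llm("Kumusta ka ngayon?", max_tokens=64)
print(out["choices"][0]["text"])
```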
|
Kamer/bert-base-uncased-eurlex
|
Kamer
| 2023-09-02T08:14:26Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/bert-base-uncased-eurlex",
"base_model:finetune:nlpaueb/bert-base-uncased-eurlex",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-02T07:18:39Z |
---
license: cc-by-sa-4.0
base_model: nlpaueb/bert-base-uncased-eurlex
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-eurlex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-eurlex
This model is a fine-tuned version of [nlpaueb/bert-base-uncased-eurlex](https://huggingface.co/nlpaueb/bert-base-uncased-eurlex) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4164
- eval_Accuracy: 0.9224
- eval_F1_macro: 0.9301
- eval_F1_class_0: 0.8941
- eval_F1_class_1: 0.9388
- eval_F1_class_2: 0.9412
- eval_F1_class_3: 0.9730
- eval_F1_class_4: 0.9148
- eval_F1_class_5: 0.9573
- eval_F1_class_6: 0.9399
- eval_F1_class_7: 0.9685
- eval_F1_class_8: 0.9630
- eval_F1_class_9: 0.9495
- eval_F1_class_10: 0.8574
- eval_F1_class_11: 0.9241
- eval_F1_class_12: 0.8677
- eval_F1_class_13: 0.9442
- eval_F1_class_14: 0.9055
- eval_F1_class_15: 0.9022
- eval_F1_class_16: 0.8929
- eval_F1_class_17: 0.9811
- eval_F1_class_18: 0.8870
- eval_F1_class_19: 1.0
- eval_runtime: 154.2922
- eval_samples_per_second: 32.918
- eval_steps_per_second: 4.116
- epoch: 0.52
- step: 3000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Ori/lama-2-13b-peft-2wikihop-strategyqa-retrieval-at1
|
Ori
| 2023-09-02T08:09:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"region:us"
] | null | 2023-09-02T08:05:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Xmm/led-large-16384-cnn_dailymail
|
Xmm
| 2023-09-02T08:09:40Z | 98 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"led",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-17T03:05:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: led-large-16384-cnn_dailymail
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: test
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 0.3869876274946419
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-large-16384-cnn_dailymail
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5544
- Rouge1: 0.3870
- Rouge2: 0.1736
- Rougel: 0.2599
- Rougelsum: 0.3653
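A minimal summarization sketch; LED expects global attention on at least the first token (the input text is a placeholder):
```python
import torch
from transformers import LEDForConditionalGeneration, LEDTokenizer

tok = LEDTokenizer.from_pretrained("Xmm/led-large-16384-cnn_dailymail")
model = LEDForConditionalGeneration.from_pretrained("Xmm/led-large-16384-cnn_dailymail")

article = "(long news article text here)"
inputs = tok(article, return_tensors="pt", truncation=True, max_length=16384)

# Put global attention on the first token, as recommended for LED.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(inputs["input_ids"],
                             global_attention_mask=global_attention_mask,
                             num_beams=4, max_length=256)
print(tok.decode(summary_ids[0], skip_special_tokens=True))
```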
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.9531 | 0.4 | 500 | 1.8639 | 0.3485 | 0.1441 | 0.2275 | 0.3288 |
| 1.9563 | 0.8 | 1000 | 1.8260 | 0.3538 | 0.1482 | 0.2315 | 0.3343 |
| 1.7176 | 1.2 | 1500 | 1.8208 | 0.3628 | 0.1527 | 0.2383 | 0.3433 |
| 1.7197 | 1.6 | 2000 | 1.8162 | 0.3696 | 0.1602 | 0.2434 | 0.3486 |
| 1.8086 | 2.0 | 2500 | 1.7924 | 0.3558 | 0.1533 | 0.2334 | 0.3361 |
| 1.2448 | 2.4 | 3000 | 1.8510 | 0.3703 | 0.1591 | 0.2447 | 0.3483 |
| 1.3574 | 2.8 | 3500 | 1.8277 | 0.3741 | 0.1593 | 0.2422 | 0.3540 |
| 1.0966 | 3.2 | 4000 | 1.8924 | 0.3682 | 0.1576 | 0.2424 | 0.3479 |
| 0.9938 | 3.6 | 4500 | 1.8957 | 0.3723 | 0.1599 | 0.2451 | 0.3511 |
| 1.0735 | 4.0 | 5000 | 1.8772 | 0.3653 | 0.1557 | 0.2399 | 0.3454 |
| 0.9106 | 4.4 | 5500 | 1.9401 | 0.3720 | 0.1585 | 0.2436 | 0.3504 |
| 1.015 | 4.8 | 6000 | 1.9320 | 0.3725 | 0.1570 | 0.2429 | 0.3515 |
| 1.7854 | 0.36 | 6500 | 1.7800 | 0.3624 | 0.1544 | 0.2390 | 0.3422 |
| 1.9079 | 0.39 | 7000 | 1.7629 | 0.3573 | 0.1553 | 0.2352 | 0.3370 |
| 1.7606 | 3.34 | 7500 | 1.6902 | 0.3783 | 0.1673 | 0.2521 | 0.3570 |
| 1.7571 | 3.57 | 8000 | 1.6563 | 0.3802 | 0.1691 | 0.2538 | 0.3587 |
| 1.6602 | 3.79 | 8500 | 1.6439 | 0.3814 | 0.1693 | 0.2548 | 0.3600 |
| 1.6614 | 4.01 | 9000 | 1.6312 | 0.3812 | 0.1691 | 0.2544 | 0.3599 |
| 1.668 | 4.24 | 9500 | 1.6189 | 0.3815 | 0.1689 | 0.2550 | 0.3603 |
| 1.6491 | 4.46 | 10000 | 1.6172 | 0.3799 | 0.1681 | 0.2540 | 0.3586 |
| 1.5994 | 4.68 | 10500 | 1.6132 | 0.3825 | 0.1702 | 0.2560 | 0.3610 |
| 1.6493 | 4.9 | 11000 | 1.6093 | 0.3828 | 0.1701 | 0.2561 | 0.3613 |
| 1.6769 | 5.13 | 11500 | 1.6074 | 0.3831 | 0.1706 | 0.2569 | 0.3619 |
| 1.6554 | 5.35 | 12000 | 1.6044 | 0.3817 | 0.1695 | 0.2559 | 0.3605 |
| 1.6155 | 5.57 | 12500 | 1.6010 | 0.3825 | 0.1700 | 0.2561 | 0.3608 |
| 1.5863 | 5.8 | 13000 | 1.5981 | 0.3829 | 0.1704 | 0.2569 | 0.3614 |
| 1.6306 | 6.02 | 13500 | 1.6004 | 0.3831 | 0.1702 | 0.2563 | 0.3618 |
| 1.6425 | 6.24 | 14000 | 1.5987 | 0.3821 | 0.1698 | 0.2561 | 0.3610 |
| 1.6863 | 6.46 | 14500 | 1.5876 | 0.3837 | 0.1710 | 0.2569 | 0.3622 |
| 1.6085 | 6.69 | 15000 | 1.5815 | 0.3836 | 0.1717 | 0.2573 | 0.3621 |
| 1.6267 | 6.91 | 15500 | 1.5792 | 0.3852 | 0.1722 | 0.2579 | 0.3633 |
| 1.5637 | 7.13 | 16000 | 1.5768 | 0.3830 | 0.1709 | 0.2568 | 0.3611 |
| 1.5586 | 7.36 | 16500 | 1.5740 | 0.3833 | 0.1706 | 0.2567 | 0.3617 |
| 1.5389 | 7.58 | 17000 | 1.5689 | 0.3858 | 0.1729 | 0.2590 | 0.3640 |
| 1.5694 | 7.8 | 17500 | 1.5645 | 0.3853 | 0.1731 | 0.2589 | 0.3636 |
| 1.5265 | 8.02 | 18000 | 1.5621 | 0.3871 | 0.1733 | 0.2596 | 0.3654 |
| 1.5273 | 8.25 | 18500 | 1.5624 | 0.3861 | 0.1726 | 0.2588 | 0.3646 |
| 1.5148 | 8.47 | 19000 | 1.5602 | 0.3866 | 0.1733 | 0.2592 | 0.3651 |
| 1.532 | 8.69 | 19500 | 1.5599 | 0.3859 | 0.1732 | 0.2593 | 0.3642 |
| 1.5113 | 8.92 | 20000 | 1.5602 | 0.3877 | 0.1748 | 0.2606 | 0.3658 |
| 1.5133 | 9.14 | 20500 | 1.5595 | 0.3855 | 0.1725 | 0.2587 | 0.3637 |
| 1.4875 | 9.36 | 21000 | 1.5572 | 0.3873 | 0.1741 | 0.2600 | 0.3654 |
| 1.5038 | 9.59 | 21500 | 1.5557 | 0.3860 | 0.1728 | 0.2590 | 0.3641 |
| 1.5062 | 9.81 | 22000 | 1.5544 | 0.3870 | 0.1736 | 0.2599 | 0.3653 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.0+cu118
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Hemanth-thunder/kazuki_kurusu_lora_xl
|
Hemanth-thunder
| 2023-09-02T08:02:49Z | 1 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-02T06:23:41Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of a kazuki kurusu
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Hemanth-thunder/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of a kazuki kurusu using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
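As a hedged inference sketch (not part of the original card), the LoRA can be loaded on top of the base SDXL pipeline with `diffusers`; the repo id below assumes the weights live in this repository, and the fp16-fix VAE matches the one used for training.
```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

# Load the fp16-fix VAE that the card says was used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Assumption: the LoRA weights are hosted in this repo.
pipe.load_lora_weights("Hemanth-thunder/kazuki_kurusu_lora_xl")

image = pipe("a photo of a kazuki kurusu", num_inference_steps=30).images[0]
image.save("kazuki_kurusu.png")
```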
|
StefanoCaloni/q-FrozenLake-v1-4x4-noSlippery
|
StefanoCaloni
| 2023-09-02T07:42:36Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T06:35:24Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="StefanoCaloni/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
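A hedged evaluation sketch (not part of the original card): `load_from_hub` is the helper from the Deep RL course notebooks, re-implemented below, and the `"qtable"`/`"env_id"` keys of the pickled dict are assumptions based on that course.
```python
import pickle
import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Hypothetical re-implementation of the course's load_from_hub helper.
def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub("StefanoCaloni/q-FrozenLake-v1-4x4-noSlippery", "q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # assumed dict keys from the course

state, _ = env.reset(seed=42)
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```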
|
StefanoCaloni/taxi
|
StefanoCaloni
| 2023-09-02T07:42:24Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T06:40:39Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="StefanoCaloni/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
squarelike/Gugugo-koja-1.3B-V0.95
|
squarelike
| 2023-09-02T07:31:26Z | 67 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"translation",
"ja",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-31T14:17:12Z |
---
license: apache-2.0
language:
- ja
- ko
pipeline_tag: translation
---
[https://github.com/jwj7140/Gugugo](https://github.com/jwj7140/Gugugo)
Prompt Template:
```
### 한국어: {sentence}</끝>
### 일본어:
```
```
### 일본어: {sentence}</끝>
### 한국어:
```
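As a hedged usage sketch (not part of the original card), a Korean→Japanese translation can be generated with plain `transformers` by filling the template above; the generation settings are illustrative.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "squarelike/Gugugo-koja-1.3B-V0.95"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Fill the Korean→Japanese template from the card with an example sentence.
prompt = "### 한국어: 안녕하세요, 만나서 반갑습니다.</끝>\n### 일본어:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Print only the newly generated tokens (the translation).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```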
|
xalphaai/llama2-qlora-finetunined
|
xalphaai
| 2023-09-02T07:20:06Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T07:19:49Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
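For illustration only (not part of the generated card), this config corresponds roughly to the following `BitsAndBytesConfig` when loading the base model with `transformers`; the base model id is a placeholder, since the card does not name it.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the 4-bit NF4 settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder: the card does not state the base model
    quantization_config=bnb_config,
    device_map="auto",
)
```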
### Framework versions
- PEFT 0.6.0.dev0
|
jackswie/sadie_sink
|
jackswie
| 2023-09-02T06:59:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-02T06:50:29Z |
[](discord.gg/ailab)


# Sadie Sink - RVC V2 - Rmvpe - 750 Epoch
**This is the voice model of the actress Sadie Sink,
trained with RVC V2 for 750 epochs.**
**A 5-minute dataset was used.**
**The dataset contains speech samples.**
_The dataset and training were made by me._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the OpenRAIL license.__
## Credits
**If you share a cover made with this model on any platform, you are kindly asked to give credits.**
- Discord: jackswie
- Reddit: u/jackk_m
- YouTube: 𝖏𝖆𝖈𝖐𝖘𝖑𝖜𝖐 (https://www.youtube.com/channel/UCZSMJToEeMuqMFDL318v3Xw)
- TikTok: jackss.aep (https://www.tiktok.com/@jackss.aep)
- Instagram: jackslwk (https://www.instagram.com/jackslwk/)

[](discord.gg/ailab)

|
Jakir057/finetuned-indian-food
|
Jakir057
| 2023-09-02T06:53:08Z | 192 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-02T06:19:35Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-indian-food
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-food
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0026
- Accuracy: 0.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
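As a usage sketch (not part of the original card), inference can go through the `image-classification` pipeline; the image path is a placeholder.
```python
from transformers import pipeline

# Classify an Indian food image with this fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="Jakir057/finetuned-indian-food")
print(classifier("some_dish.jpg"))  # placeholder image path; a URL or PIL image also works
```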
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7056 | 0.1 | 100 | 0.5113 | 0.8881 |
| 0.3027 | 0.21 | 200 | 0.1280 | 0.9796 |
| 0.2823 | 0.31 | 300 | 0.1580 | 0.9656 |
| 0.3273 | 0.42 | 400 | 0.0879 | 0.9837 |
| 0.1808 | 0.52 | 500 | 0.0812 | 0.9822 |
| 0.2101 | 0.63 | 600 | 0.0339 | 0.9937 |
| 0.1495 | 0.73 | 700 | 0.0568 | 0.9833 |
| 0.1296 | 0.84 | 800 | 0.0629 | 0.9844 |
| 0.1462 | 0.94 | 900 | 0.0886 | 0.9733 |
| 0.0519 | 1.04 | 1000 | 0.0544 | 0.9870 |
| 0.3192 | 1.15 | 1100 | 0.0892 | 0.9726 |
| 0.158 | 1.25 | 1200 | 0.0632 | 0.98 |
| 0.0266 | 1.36 | 1300 | 0.0233 | 0.9944 |
| 0.1832 | 1.46 | 1400 | 0.0292 | 0.9930 |
| 0.1212 | 1.57 | 1500 | 0.0489 | 0.9852 |
| 0.0994 | 1.67 | 1600 | 0.0142 | 0.9974 |
| 0.0219 | 1.78 | 1700 | 0.0277 | 0.9930 |
| 0.0664 | 1.88 | 1800 | 0.0158 | 0.9974 |
| 0.0834 | 1.99 | 1900 | 0.0124 | 0.9978 |
| 0.1093 | 2.09 | 2000 | 0.0140 | 0.9974 |
| 0.1726 | 2.19 | 2100 | 0.0147 | 0.9963 |
| 0.0476 | 2.3 | 2200 | 0.0058 | 0.9993 |
| 0.0257 | 2.4 | 2300 | 0.0424 | 0.9911 |
| 0.0215 | 2.51 | 2400 | 0.0076 | 0.9989 |
| 0.0748 | 2.61 | 2500 | 0.0099 | 0.9974 |
| 0.0059 | 2.72 | 2600 | 0.0053 | 0.9993 |
| 0.0527 | 2.82 | 2700 | 0.0149 | 0.9963 |
| 0.0203 | 2.93 | 2800 | 0.0041 | 0.9993 |
| 0.0791 | 3.03 | 2900 | 0.0033 | 0.9989 |
| 0.0389 | 3.13 | 3000 | 0.0033 | 0.9989 |
| 0.0459 | 3.24 | 3100 | 0.0044 | 0.9989 |
| 0.0276 | 3.34 | 3200 | 0.0031 | 0.9996 |
| 0.0139 | 3.45 | 3300 | 0.0028 | 0.9996 |
| 0.0076 | 3.55 | 3400 | 0.0055 | 0.9985 |
| 0.0097 | 3.66 | 3500 | 0.0027 | 0.9996 |
| 0.0193 | 3.76 | 3600 | 0.0026 | 0.9996 |
| 0.0471 | 3.87 | 3700 | 0.0027 | 0.9996 |
| 0.0282 | 3.97 | 3800 | 0.0027 | 0.9996 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dt-and-vanilla-ardt/ardt-vanilla-robust_train_walker2d_level-0209_0608-99
|
dt-and-vanilla-ardt
| 2023-09-02T06:36:38Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-09-02T05:10:31Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-robust_train_walker2d_level-0209_0608-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-robust_train_walker2d_level-0209_0608-99
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
GyanPrakashKushwaha/Sentiment-Analysis
|
GyanPrakashKushwaha
| 2023-09-02T06:26:34Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-09-02T06:26:34Z |
---
license: bigscience-openrail-m
---
|
budecosystem/genz-70b
|
budecosystem
| 2023-09-02T06:03:21Z | 2,642 | 30 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-21T11:36:04Z |
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
---
<div align="center"><h1 align="center">~ GenZ ~</h1><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/genz-logo.png" width=150></div>
<p align="center"><i>Democratizing access to LLMs for the open-source community.<br>Let's advance AI, together. </i></p>
---
## Introduction 🎉
Welcome to **GenZ**, an advanced Large Language Model (LLM) fine-tuned on the foundation of Meta's open-source Llama V2 70B parameter model. At Bud Ecosystem, we believe in the power of open-source collaboration to drive the advancement of technology at an accelerated pace. Our vision is to democratize access to fine-tuned LLMs, and to that end, we will be releasing a series of models across different parameter counts (7B, 13B, and 70B) and quantizations (32-bit and 4-bit) for the open-source community to use, enhance, and build upon.
<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/mt_bench_compare.png" width="500"></p>
The smaller quantized versions of our models make them more accessible, enabling their use even on personal computers. This opens up a world of possibilities for developers, researchers, and enthusiasts to experiment with these models and contribute to the collective advancement of language model technology.
GenZ isn't just a powerful text generator—it's a sophisticated AI assistant, capable of understanding and responding to user prompts with high-quality responses. We've taken the robust capabilities of Llama V2 and fine-tuned them to offer a more user-focused experience. Whether you're seeking informative responses or engaging interactions, GenZ is designed to deliver.
And this isn't the end. It's just the beginning of a journey towards creating more advanced, more efficient, and more accessible language models. We invite you to join us on this exciting journey. 🚀
---
<h2>Milestone Releases ️🏁</h2>
**[21 August 2023]**
[_GenZ-70B_](https://huggingface.co/budecosystem/genz-70b) : We're excited to announce the release of our GenZ 70B model. Experience the advancements by downloading the model from [HuggingFace](https://huggingface.co/budecosystem/genz-70b).
**[27 July 2023]**
[_GenZ-13B V2 (ggml)_](https://huggingface.co/budecosystem/genz-13b-v2-ggml) : Announcing our GenZ-13B v2 with ggml. This variant of GenZ can run inference on CPU alone, with no GPU required. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-ggml).
**[27 July 2023]**
[_GenZ-13B V2 (4-bit)_](https://huggingface.co/budecosystem/genz-13b-v2-4bit) : Announcing our GenZ-13B v2 with 4-bit quantisation, enabling inference with much less GPU memory than the 32-bit variant. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-4bit).
**[26 July 2023]**
[_GenZ-13B V2_](https://huggingface.co/budecosystem/genz-13b-v2) : We're excited to announce the release of our Genz 13B v2 model, a step forward with improved evaluation results compared to v1. Experience the advancements by downloading the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2).
**[20 July 2023]**
[_GenZ-13B_](https://huggingface.co/budecosystem/genz-13b) : We marked an important milestone with the release of the Genz 13B model. The journey began here, and you can partake in it by downloading the model from [Hugging Face](https://huggingface.co/budecosystem/genz-13b).
---
<h2>Evaluations 🎯</h2>
Evaluating our model is a key part of our fine-tuning process. It helps us understand how our model is performing and how it stacks up against other models. Here's a look at some of the key evaluations for GenZ 70B:
<h3>Benchmark Comparison</h3>
We've compared GenZ models to understand the improvements our fine-tuning has achieved.
| Model Name | MT Bench | MMLU | Human Eval | BBH |
|:----------:|:--------:|:----:|:----------:|:----:|
| Genz 13B | 6.12 | 53.62| 17.68 | 37.76|
| Genz 13B v2| 6.79 | 53.68| 21.95 | 38.1 |
| Genz 70B | 7.33 | 70.32| 37.8 |54.69 |
<h3>MT Bench Score</h3>
A key evaluation metric we use is the MT Bench score. This score provides a comprehensive assessment of our model's performance across a range of tasks.
<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/mt_bench_score.png" width="500"></p>
---
<h2>Getting Started on Hugging Face 🤗</h2>
Getting up and running with our models on Hugging Face is a breeze. Follow these steps:
<h3>1️⃣ : Import necessary modules</h3>
Start by importing the necessary modules from the ‘transformers’ library and ‘torch’.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/genz-70b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("budecosystem/genz-70b", torch_dtype=torch.bfloat16, rope_scaling={"type": "dynamic", "factor": 2})
prompt = "### User:\nWrite a python flask code for login management\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
Want to interact with the model in a more intuitive way? We have a Gradio interface set up for that. Head over to our GitHub page, clone the repository, and run the ‘generate.py’ script to try it out. Happy experimenting! 😄
<h2>Why Use GenZ? 💡</h2>
You might be wondering, "Why should I choose GenZ over a pretrained model?" The answer lies in the extra mile we've gone to fine-tune our models.
While pretrained models are undeniably powerful, GenZ brings something extra to the table. We've fine-tuned it with curated datasets, which means it has additional skills and capabilities beyond what a pretrained model can offer. Whether you need it for a simple task or a complex project, GenZ is up for the challenge.
What's more, we are committed to continuously enhancing GenZ. We believe in the power of constant learning and improvement. That's why we'll be regularly fine-tuning our models with various curated datasets to make them even better. Our goal is to reach the state of the art and beyond - and we're committed to staying the course until we get there.
But don't just take our word for it. We've provided detailed evaluations and performance details in a later section, so you can see the difference for yourself.
Choose GenZ and join us on this journey. Together, we can push the boundaries of what's possible with large language models.
---
<h2>Model Card for GenZ 70B 📄</h2>
Here's a quick overview of everything you need to know about GenZ 70B.
<h3>Model Details:</h3>
- Developed by: Bud Ecosystem
- Base pretrained model type: Llama V2 70B
- Model Architecture: GenZ 70B, fine-tuned on Llama V2 70B, is an auto-regressive language model that employs an optimized transformer architecture. The fine-tuning process for GenZ 70B leveraged Supervised Fine-Tuning (SFT).
- License: The model is available for commercial use under a custom commercial license. For more information, please visit: [Meta AI Model and Library Downloads](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
---
<h2>Intended Use 💼</h2>
When we created GenZ 70B, we had a clear vision of how it could be used to push the boundaries of what's possible with large language models. We also understand the importance of using such models responsibly. Here's a brief overview of the intended and out-of-scope uses for GenZ 70B.
<h3>Direct Use</h3>
GenZ 70B is designed to be a powerful tool for research on large language models. It's also an excellent foundation for further specialization and fine-tuning for specific use cases, such as:
- Text summarization
- Text generation
- Chatbot creation
- And much more!
<h3>Out-of-Scope Use 🚩</h3>
While GenZ 70B is versatile, there are certain uses that are out of scope:
- Production use without adequate assessment of risks and mitigation
- Any use cases which may be considered irresponsible or harmful
- Use in any manner that violates applicable laws or regulations, including trade compliance laws
- Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2
Remember, GenZ 70B, like any large language model, is trained on large-scale corpora representative of the web and may therefore carry the stereotypes and biases commonly encountered online.
<h3>Recommendations 🧠</h3>
We recommend users of GenZ 70B to consider fine-tuning it for the specific set of tasks of interest. Appropriate precautions and guardrails should be taken for any production use. Using GenZ 70B responsibly is key to unlocking its full potential while maintaining a safe and respectful environment.
---
<h2>Training Details 📚</h2>
When fine-tuning GenZ 70B, we took a meticulous approach to ensure we were building on the solid base of the pretrained Llama V2 70B model in the most effective way. Here's a look at the key details of our training process:
<h3>Fine-Tuning Training Data</h3>
For the fine-tuning process, we used a carefully curated mix of datasets. These included data from OpenAssistant, an instruction fine-tuning dataset, and Thought Source for the Chain Of Thought (CoT) approach. This diverse mix of data sources helped us enhance the model's capabilities across a range of tasks.
<h3>Hyperparameters</h3>
Here are the hyperparameters we used for fine-tuning:
| Hyperparameter | Value |
| -------------- | ----- |
| Warmup Ratio | 0.04 |
| Learning Rate Scheduler Type | Cosine |
| Learning Rate | 2e-5 |
| Number of Training Epochs | 3 |
| Per Device Training Batch Size | 4 |
| Gradient Accumulation Steps | 4 |
| Precision | FP16 |
| Optimizer | AdamW |
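As a rough illustration only (not the authors' actual training script), these settings map onto Hugging Face `TrainingArguments` approximately as follows; the output directory is a placeholder.
```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameter table above onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="genz-70b-sft",          # placeholder
    warmup_ratio=0.04,
    lr_scheduler_type="cosine",
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    fp16=True,                           # FP16 precision
    optim="adamw_torch",                 # AdamW optimizer
)
```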
---
<h2>Looking Ahead 👀</h2>
We're excited about the journey ahead with GenZ. We're committed to continuously improving and enhancing our models, and we're excited to see what the open-source community will build with them. We believe in the power of collaboration, and we can't wait to see what we can achieve together.
Remember, we're just getting started. This is just the beginning of a journey that we believe will revolutionize the world of large language models. We invite you to join us on this exciting journey. Together, we can push the boundaries of what's possible with AI. 🚀
---
Check the GitHub for the code -> [GenZ](https://raw.githubusercontent.com/BudEcosystem/GenZ)
|
Hellstar1337/freyaLoRA
|
Hellstar1337
| 2023-09-02T05:45:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-02T05:41:39Z |
---
license: creativeml-openrail-m
---
|
jmhessel/cosmo-v2-7b
|
jmhessel
| 2023-09-02T05:39:26Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T05:39:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
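As a hedged sketch (not part of the generated card), the adapter can be attached to its base model with `peft`; the base model id below is an assumption, since the card does not name it.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # assumption: the card does not state the base model
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the LoRA/PEFT adapter from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "jmhessel/cosmo-v2-7b")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```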
### Framework versions
- PEFT 0.5.0
|
Imxxn/AudioCourseU6-TextToSpeech
|
Imxxn
| 2023-09-02T05:38:00Z | 80 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-02T05:18:20Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: AudioCourseU6-TextToSpeech
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AudioCourseU6-TextToSpeech
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
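As a usage sketch (not part of the original card), inference follows the standard SpeechT5 recipe; the speaker-embedding dataset is an assumption, and if the processor was not pushed with this checkpoint it can be loaded from microsoft/speecht5_tts instead.
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("Imxxn/AudioCourseU6-TextToSpeech")
model = SpeechT5ForTextToSpeech.from_pretrained("Imxxn/AudioCourseU6-TextToSpeech")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test.", return_tensors="pt")

# Assumption: use an x-vector from the CMU Arctic speaker-embedding dataset.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```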
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
flytech/platistil
|
flytech
| 2023-09-02T05:18:38Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:openai-community/gpt2-medium",
"base_model:finetune:openai-community/gpt2-medium",
"license:mit",
"region:us"
] | null | 2023-09-01T04:11:58Z |
---
license: mit
base_model: gpt2-medium
tags:
- generated_from_trainer
model-index:
- name: platistil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platistil
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
substratusai/weaviate-gorilla-v3
|
substratusai
| 2023-09-02T05:13:22Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-01T22:50:07Z |
## Prompt
```
{input}
{output}
```
Example of an entry used for finetuning:
```
Your task is to write an API request for a new schema given the API reference and an example. The user command is: "Get me the details of 2 music tracks that are similar to the given vector." Here is the API reference for a query that will help with this command and an example of how to use it: {Get {JeopardyQuestion (limit: 2,nearVector: {vector: [-0.0125526935, -0.021168863, -0.01076519, ...]}}}}} Could you please formulate this query for the following schema? {"class": "Track","description": "A music track.","properties": [{"name": "trackId","dataType": ["uuid"],"description": "A unique identifier for each track.","moduleConfig": {"text2vec-transformers": {"skip": true,"vectorizeClassName": false,"vectorizePropertyName": false}}{"name": "title","dataType": ["text"],"description": "The title of the track.","moduleConfig": {"text2vec-transformers": {"skip": false,"vectorizeClassName": false,"vectorizePropertyName": false}}{"name": "duration","dataType": ["int"],"description": "The duration of the track in seconds.","moduleConfig": {"text2vec-transformers": {"skip": true,"vectorizeClassName": false,"vectorizePropertyName": false}}{"name": "artist","dataType": ["Artist"],"description": "The artist of the track.","moduleConfig": {"text2vec-transformers": {"skip": true,"vectorizeClassName": false,"vectorizePropertyName": false}}{"name": "album","dataType": ["Album"],"description": "The album of the track.","moduleConfig": {"text2vec-transformers": {"skip": true,"vectorizeClassName": false,"vectorizePropertyName": false}}}} VERY IMPORTANT! Please only output the GraphQL for the query and nothing else!
{ Get { Track ( limit: 2, nearVector: { vector: [-0.0125526935, -0.021168863, -0.01076519, ...] } ) { trackId title duration artist { artistId name } album { albumId title } } }}
```
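As a hedged usage sketch (not part of the original card), the model can be prompted in the same format with plain `transformers`; the user command below is illustrative and the schema portion of the prompt is elided.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "substratusai/weaviate-gorilla-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Build the prompt in the finetuning format shown above (API reference + example + schema).
prompt = (
    'Your task is to write an API request for a new schema given the API reference and an example. '
    'The user command is: "Get me the details of 2 music tracks that are similar to the given vector." '
    '...'  # append the API reference, schema, and final instruction as in the example above
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Print only the generated GraphQL query.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```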
|
dt-and-vanilla-ardt/ardt-vanilla-robust_train_walker2d_level-0209_0437-66
|
dt-and-vanilla-ardt
| 2023-09-02T05:08:32Z | 35 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-09-02T03:38:44Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-robust_train_walker2d_level-0209_0437-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-robust_train_walker2d_level-0209_0437-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
vita-group/llama-2-7b_wanda_unstructured
|
vita-group
| 2023-09-02T05:03:35Z | 10 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-09-01T15:05:46Z |
---
license: mit
---
# Compressed LLM Model Zone
The models are prepared by [Visual Informatics Group @ University of Texas at Austin (VITA-group)](https://vita-group.github.io/). Credits to Ajay Jaiswal, Zhenyu Zhang.
License: [MIT License](https://opensource.org/license/mit/)
Setup environment
```shell
pip install torch==2.0.0+cu117 torchvision==0.15.1+cu117 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu117
pip install transformers==4.31.0
pip install accelerate
```
How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model = 'llama-2-7b'
comp_method = 'magnitude_unstructured'
comp_degree = 0.2
model_path = f'vita-group/{base_model}_{comp_method}'
model = AutoModelForCausalLM.from_pretrained(
model_path,
revision=f's{comp_degree}',
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
input_ids = tokenizer('Hello! I am a VITA-compressed-LLM chatbot!', return_tensors='pt').input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
| | Base Model | Model Size | Compression Method | Compression Degree |
|---:|:-------------|:-------------|:----------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| 0 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.1](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.1) |
| 1 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.2](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.2) |
| 2 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.3](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.3) |
| 3 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.5](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.5) |
| 4 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.6](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.6) |
| 5 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.1](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.1) |
| 6 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.2](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.2) |
| 7 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.3](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.3) |
| 8 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.5](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.5) |
| 9 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.6](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.6) |
| 10 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.1](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.1) |
| 11 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.2](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.2) |
| 12 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.3](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.3) |
| 13 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.5](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.5) |
| 14 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.6](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.6) |
|
vita-group/llama-2-7b_magnitude_unstructured
|
vita-group
| 2023-09-02T05:03:13Z | 9 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-09-01T15:03:37Z |
---
license: mit
---
# Compressed LLM Model Zone
The models are prepared by [Visual Informatics Group @ University of Texas at Austin (VITA-group)](https://vita-group.github.io/). Credits to Ajay Jaiswal, Zhenyu Zhang.
License: [MIT License](https://opensource.org/license/mit/)
Setup environment
```shell
pip install torch==2.0.0+cu117 torchvision==0.15.1+cu117 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu117
pip install transformers==4.31.0
pip install accelerate
```
How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model = 'llama-2-7b'
comp_method = 'magnitude_unstructured'
comp_degree = 0.2
model_path = f'vita-group/{base_model}_{comp_method}'
model = AutoModelForCausalLM.from_pretrained(
model_path,
revision=f's{comp_degree}',
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
input_ids = tokenizer('Hello! I am a VITA-compressed-LLM chatbot!', return_tensors='pt').input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
| | Base Model | Model Size | Compression Method | Compression Degree |
|---:|:-------------|:-------------|:----------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| 0 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.1](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.1) |
| 1 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.2](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.2) |
| 2 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.3](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.3) |
| 3 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.5](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.5) |
| 4 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.6](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.6) |
| 5 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.1](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.1) |
| 6 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.2](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.2) |
| 7 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.3](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.3) |
| 8 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.5](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.5) |
| 9 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.6](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.6) |
| 10 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.1](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.1) |
| 11 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.2](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.2) |
| 12 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.3](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.3) |
| 13 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.5](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.5) |
| 14 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.6](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.6) |
|
xiaoygv/xiaos
|
xiaoygv
| 2023-09-02T04:56:48Z | 0 | 0 |
asteroid
|
[
"asteroid",
"dataset:PygmalionAI/PIPPA",
"license:afl-3.0",
"region:us"
] | null | 2023-09-02T04:55:25Z |
---
license: afl-3.0
datasets:
- PygmalionAI/PIPPA
metrics:
- bleu
library_name: asteroid
---
|
minh21/results
|
minh21
| 2023-09-02T04:56:45Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"region:us"
] | null | 2023-09-01T07:33:03Z |
---
license: apache-2.0
base_model: google/flan-t5-large
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 1.0 | 860 | nan |
| 0.0 | 2.0 | 1720 | nan |
| 0.0 | 3.0 | 2580 | nan |
| 0.0 | 4.0 | 3440 | nan |
| 0.0 | 5.0 | 4300 | nan |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
vita-group/llama-2-7b_wanda_2_4_gptq_4bit_128g
|
vita-group
| 2023-09-02T04:55:38Z | 7 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-02T04:52:54Z |
---
license: mit
---
# Compressed LLM Model Zone
The models are prepared by [Visual Informatics Group @ University of Texas at Austin (VITA-group)](https://vita-group.github.io/).
License: [MIT License](https://opensource.org/license/mit/)
Setup environment
```shell
pip install torch==2.0.0+cu117 torchvision==0.15.1+cu117 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu117
pip install transformers==4.31.0
pip install accelerate
pip install auto-gptq # for gptq
```
How to use pruned models
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model = 'llama-2-7b'
comp_method = 'magnitude_unstructured'
comp_degree = 0.2
model_path = f'vita-group/{base_model}_{comp_method}'
model = AutoModelForCausalLM.from_pretrained(
model_path,
revision=f's{comp_degree}',
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
input_ids = tokenizer('Hello! I am a VITA-compressed-LLM chatbot!', return_tensors='pt').input_ids.cuda()
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
How to use quantized models
```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_path = 'vita-group/llama-2-7b_wanda_2_4_gptq_4bit_128g'
model = AutoGPTQForCausalLM.from_quantized(
model_path,
# inject_fused_attention=False, # or
disable_exllama=True,
device_map='auto',
)
```
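To generate text with the quantized model, the tokenizer of the base checkpoint can be reused, mirroring the pruned-model snippet above:
```python
from transformers import AutoTokenizer

# Continuation of the snippet above: generate with the GPTQ-quantized model.
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
input_ids = tokenizer('Hello! I am a VITA-compressed-LLM chatbot!', return_tensors='pt').input_ids.cuda()
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```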
| | Base Model | Model Size | Compression Method | Compression Degree |
|---:|:-------------|:-------------|:----------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| 0 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.1](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.1) |
| 1 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.2](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.2) |
| 2 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.3](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.3) |
| 3 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.5](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.5) |
| 4 | Llama-2 | 7b | [magnitude_unstructured](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured) | [s0.6](https://huggingface.co/vita-group/llama-2-7b_magnitude_unstructured/tree/s0.6) |
| 5 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.1](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.1) |
| 6 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.2](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.2) |
| 7 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.3](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.3) |
| 8 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.5](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.5) |
| 9 | Llama-2 | 7b | [sparsegpt_unstructured](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured) | [s0.6](https://huggingface.co/vita-group/llama-2-7b_sparsegpt_unstructured/tree/s0.6) |
| 10 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.1](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.1) |
| 11 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.2](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.2) |
| 12 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.3](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.3) |
| 13 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.5](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.5) |
| 14 | Llama-2 | 7b | [wanda_unstructured](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured) | [s0.6](https://huggingface.co/vita-group/llama-2-7b_wanda_unstructured/tree/s0.6) |
|