Dataset schema (one row per model card): modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 12:31:00) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 distinct values) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 12:28:53) | card (string, 11 to 1.01M chars)
bigmorning/whisper_charsplit_new_round3__0060 | bigmorning | 2023-08-14T07:34:51Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:bigmorning/whisper_charsplit_new_round2__0061", "base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-14T07:34:45Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0060
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0060
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0009
- Train Accuracy: 0.0795
- Train Wermet: 7.4969
- Validation Loss: 0.5584
- Validation Accuracy: 0.0771
- Validation Wermet: 6.7292
- Epoch: 59
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
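For reference, this optimizer configuration corresponds to the Keras-compatible `AdamWeightDecay` class shipped with Transformers (TensorFlow backend); a minimal re-creation sketch follows — the model and `compile` call are omitted, and the mapping of the config above to constructor arguments is an assumption:
```python
# Sketch only: rebuilds the optimizer listed above for a TF/Keras training setup.
# Requires TensorFlow to be installed alongside transformers.
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)
# model.compile(optimizer=optimizer)  # e.g. with a TFWhisperForConditionalGeneration model
```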
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
| 0.0000 | 0.0795 | 8.2607 | 0.5689 | 0.0771 | 7.2107 | 30 |
| 0.0000 | 0.0795 | 8.2992 | 0.5700 | 0.0772 | 7.2006 | 31 |
| 0.0000 | 0.0795 | 8.3059 | 0.5721 | 0.0772 | 7.2341 | 32 |
| 0.0000 | 0.0795 | 8.2872 | 0.5744 | 0.0772 | 7.2069 | 33 |
| 0.0080 | 0.0794 | 8.3693 | 0.5947 | 0.0762 | 7.3034 | 34 |
| 0.0063 | 0.0794 | 8.2517 | 0.5491 | 0.0769 | 7.1324 | 35 |
| 0.0008 | 0.0795 | 7.9115 | 0.5447 | 0.0771 | 6.9422 | 36 |
| 0.0002 | 0.0795 | 7.6265 | 0.5471 | 0.0771 | 6.8107 | 37 |
| 0.0001 | 0.0795 | 7.6685 | 0.5493 | 0.0771 | 6.6914 | 38 |
| 0.0001 | 0.0795 | 7.6100 | 0.5515 | 0.0771 | 6.7738 | 39 |
| 0.0000 | 0.0795 | 7.6623 | 0.5535 | 0.0771 | 6.7829 | 40 |
| 0.0000 | 0.0795 | 7.6768 | 0.5556 | 0.0771 | 6.8287 | 41 |
| 0.0000 | 0.0795 | 7.7199 | 0.5578 | 0.0772 | 6.8398 | 42 |
| 0.0000 | 0.0795 | 7.7423 | 0.5600 | 0.0772 | 6.8518 | 43 |
| 0.0000 | 0.0795 | 7.7561 | 0.5617 | 0.0772 | 6.8898 | 44 |
| 0.0000 | 0.0795 | 7.7766 | 0.5639 | 0.0772 | 6.8982 | 45 |
| 0.0000 | 0.0795 | 7.7962 | 0.5659 | 0.0772 | 6.9091 | 46 |
| 0.0000 | 0.0795 | 7.8106 | 0.5680 | 0.0772 | 6.9293 | 47 |
| 0.0000 | 0.0795 | 7.8387 | 0.5701 | 0.0772 | 6.9401 | 48 |
| 0.0000 | 0.0795 | 7.8480 | 0.5724 | 0.0772 | 6.9544 | 49 |
| 0.0000 | 0.0795 | 7.8755 | 0.5744 | 0.0772 | 6.9767 | 50 |
| 0.0000 | 0.0795 | 7.8924 | 0.5770 | 0.0772 | 6.9928 | 51 |
| 0.0000 | 0.0795 | 7.9169 | 0.5794 | 0.0772 | 7.0149 | 52 |
| 0.0000 | 0.0795 | 7.9400 | 0.5822 | 0.0772 | 7.0438 | 53 |
| 0.0000 | 0.0795 | 7.9697 | 0.5846 | 0.0772 | 7.0785 | 54 |
| 0.0000 | 0.0795 | 8.0061 | 0.5875 | 0.0772 | 7.0840 | 55 |
| 0.0000 | 0.0795 | 8.0364 | 0.5907 | 0.0772 | 7.0683 | 56 |
| 0.0113 | 0.0793 | 7.8674 | 0.5714 | 0.0768 | 6.0540 | 57 |
| 0.0030 | 0.0795 | 7.4853 | 0.5586 | 0.0770 | 6.6707 | 58 |
| 0.0009 | 0.0795 | 7.4969 | 0.5584 | 0.0771 | 6.7292 | 59 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Den4ikAI/sbert_large_mt_ru_retriever | Den4ikAI | 2023-08-14T07:34:25Z | 2,565 | 2 | sentence-transformers | ["sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "ru", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2023-08-08T05:49:23Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- source_sentence: 'query: Когда родился Пушкин?'
sentences:
- >-
passage: Алекса́ндр Серге́евич Пу́шкин (26 мая [6 июня] 1799, Москва — 29
января [10 февраля] 1837, Санкт-Петербург) — русский поэт, драматург и
прозаик, заложивший основы русского реалистического направления[2],
литературный критик[3] и теоретик литературы, историк[3], публицист,
журналист[3].
- 'passage: Пушкин ловил кайф со своими друзьями'
- >-
passage: Пушкин из самых авторитетных литературных деятелей первой трети XIX
века. Ещё при жизни Пушкина сложилась его репутация величайшего
национального русского поэта[4][5]. Пушкин рассматривается как
основоположник современного русского литературного языка[~ 2].
license: mit
language:
- ru
---
# Den4ikAI/sbert_large_mt_ru_retriever
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Den4ikAI/sbert_large_mt_ru_retriever')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Den4ikAI/sbert_large_mt_ru_retriever')
model = AutoModel.from_pretrained('Den4ikAI/sbert_large_mt_ru_retriever')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Den4ikAI/sbert_large_mt_ru_retriever)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3622 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3622,
"weight_decay": 1e-05
}
```
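Taken together, a hedged sketch of the training run these parameters describe (the training pairs and the base checkpoint below are placeholders; only the hyperparameters above come from this card):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed base checkpoint; the card does not state which model training started from.
model = SentenceTransformer("sberbank-ai/sbert_large_mt_nlu_ru")

# Placeholder data: (query, passage) pairs with binary relevance labels.
train_examples = [
    InputExample(texts=["query: Когда родился Пушкин?", "passage: Пушкин родился 26 мая 1799 года."], label=1.0),
    InputExample(texts=["query: Когда родился Пушкин?", "passage: Пушкин ловил кайф со своими друзьями."], label=0.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=4)

# ContrastiveLoss with cosine distance (the default metric), margin 0.5, size_average True.
train_loss = losses.ContrastiveLoss(model=model, margin=0.5, size_average=True)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=3622,
    weight_decay=1e-05,
    optimizer_params={"lr": 2e-05},
    evaluation_steps=500,
    max_grad_norm=1,
)
```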
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Den4ikAI/rubert-tiny2-retriever | Den4ikAI | 2023-08-14T07:33:22Z | 2 | 2 | sentence-transformers | ["sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "ru", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2023-08-07T13:33:54Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: mit
language:
- ru
widget:
- source_sentence: "query: Когда родился Пушкин?"
sentences:
- "passage: Алекса́ндр Серге́евич Пу́шкин (26 мая [6 июня] 1799, Москва — 29 января [10 февраля] 1837, Санкт-Петербург) — русский поэт, драматург и прозаик, заложивший основы русского реалистического направления[2], литературный критик[3] и теоретик литературы, историк[3], публицист, журналист[3]."
- "passage: Пушкин ловил кайф со своими друзьями"
- "passage: Пушкин из самых авторитетных литературных деятелей первой трети XIX века. Ещё при жизни Пушкина сложилась его репутация величайшего национального русского поэта[4][5]. Пушкин рассматривается как основоположник современного русского литературного языка[~ 2]."
---
# Den4ikAI/rubert-tiny2-retriever
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 312-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Den4ikAI/rubert-tiny2-retriever')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Den4ikAI/rubert-tiny2-retriever')
model = AutoModel.from_pretrained('Den4ikAI/rubert-tiny2-retriever')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Den4ikAI/rubert-tiny2-retriever)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 966 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 966,
"weight_decay": 1e-05
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 2048, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 312, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
bookbot/sherpa-ncnn-pruned-transducer-stateless7-streaming-id | bookbot | 2023-08-14T07:32:20Z | 0 | 1 | null | ["icefall", "sherpa-ncnn", "phoneme-recognition", "automatic-speech-recognition", "id", "dataset:mozilla-foundation/common_voice_13_0", "dataset:indonesian-nlp/librivox-indonesia", "dataset:google/fleurs", "license:apache-2.0", "region:us"] | automatic-speech-recognition | 2023-06-23T07:58:15Z |
---
language: id
license: apache-2.0
tags:
- icefall
- sherpa-ncnn
- phoneme-recognition
- automatic-speech-recognition
datasets:
- mozilla-foundation/common_voice_13_0
- indonesian-nlp/librivox-indonesia
- google/fleurs
---
# Sherpa-ncnn Pruned Stateless Zipformer RNN-T Streaming ID
Sherpa-ncnn Pruned Stateless Zipformer RNN-T Streaming ID is an automatic speech recognition model trained on the following datasets:
- [Common Voice ID](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0)
- [LibriVox Indonesia](https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia)
- [FLEURS ID](https://huggingface.co/datasets/google/fleurs)
Instead of being trained to predict sequences of words, this model was trained to predict sequences of phonemes, e.g. `['p', 'ə', 'r', 'b', 'u', 'a', 't', 'a', 'n', 'ɲ', 'a']`. Therefore, the model's [vocabulary](https://huggingface.co/bookbot/pruned-transducer-stateless7-streaming-id/blob/main/data/lang_phone/tokens.txt) contains the different IPA phonemes found in [g2p ID](https://github.com/bookbot-kids/g2p_id).
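As a hedged illustration of where such phoneme sequences come from (assuming the `g2p_id` package exposes a callable `G2p` class as its repository describes; the exact API and return structure may differ):
```python
# Sketch only: grapheme-to-phoneme conversion with g2p ID
# (see https://github.com/bookbot-kids/g2p_id for the actual installation and usage).
from g2p_id import G2p

g2p = G2p()
# For a word like "perbuatannya", the expected output is an IPA phoneme sequence
# along the lines of ['p', 'ə', 'r', 'b', 'u', 'a', 't', 'a', 'n', 'ɲ', 'a'].
print(g2p("perbuatannya"))
```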
This model was converted from the TorchScript version of [Pruned Stateless Zipformer RNN-T Streaming ID](https://huggingface.co/bookbot/pruned-transducer-stateless7-streaming-id) to ncnn format.
## Converting from TorchScript
Refer to the [official instructions](https://icefall.readthedocs.io/en/latest/model-export/export-ncnn-zipformer.html) for conversion to ncnn, which includes installation of `csukuangfj`'s [ncnn](https://github.com/csukuangfj/ncnn) fork.
## Frameworks
- [k2](https://github.com/k2-fsa/k2)
- [icefall](https://github.com/bookbot-hive/icefall)
- [lhotse](https://github.com/bookbot-hive/lhotse)
- [sherpa-ncnn](https://github.com/k2-fsa/sherpa-ncnn)
- [ncnn](https://github.com/csukuangfj/ncnn)
|
ihgn/Discriminator-Paraphrase | ihgn | 2023-08-14T07:30:21Z | 104 | 0 | transformers | ["transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-08-14T01:16:25Z |
---
pipeline_tag: text-classification
---
```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = BartTokenizer.from_pretrained('ihgn/paraphrase-detection')
model = BartForConditionalGeneration.from_pretrained("ihgn/paraphrase-detection").to(device)

source_sentence = "This was a series of nested angular standards , so that measurements in azimuth and elevation could be done directly in polar coordinates relative to the ecliptic."
target_paraphrase = "This was a series of nested polar scales , so that measurements in azimuth and elevation could be performed directly in angular coordinates relative to the ecliptic"

def paraphrase_detection(model, tokenizer, source_sentence, target_paraphrase):
    # Tokenize the sentence pair as a single input
    inputs = tokenizer.encode_plus(source_sentence + ' <sep> ' + target_paraphrase, return_tensors='pt')
    # Classify the input using the model
    with torch.no_grad():
        outputs = model.generate(inputs['input_ids'].to(device))
    # Decode the generated output and map it to a binary label
    generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()
    predicted_label = 1 if generated_text == '1' else 0
    print("Predicted Label:", predicted_label)

paraphrase_detection(model, tokenizer, source_sentence, target_paraphrase)
```
|
bigmorning/whisper_charsplit_new_round3__0058 | bigmorning | 2023-08-14T07:26:41Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:bigmorning/whisper_charsplit_new_round2__0061", "base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-14T07:26:32Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0058
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0058
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0113
- Train Accuracy: 0.0793
- Train Wermet: 7.8674
- Validation Loss: 0.5714
- Validation Accuracy: 0.0768
- Validation Wermet: 6.0540
- Epoch: 57
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
| 0.0000 | 0.0795 | 8.2607 | 0.5689 | 0.0771 | 7.2107 | 30 |
| 0.0000 | 0.0795 | 8.2992 | 0.5700 | 0.0772 | 7.2006 | 31 |
| 0.0000 | 0.0795 | 8.3059 | 0.5721 | 0.0772 | 7.2341 | 32 |
| 0.0000 | 0.0795 | 8.2872 | 0.5744 | 0.0772 | 7.2069 | 33 |
| 0.0080 | 0.0794 | 8.3693 | 0.5947 | 0.0762 | 7.3034 | 34 |
| 0.0063 | 0.0794 | 8.2517 | 0.5491 | 0.0769 | 7.1324 | 35 |
| 0.0008 | 0.0795 | 7.9115 | 0.5447 | 0.0771 | 6.9422 | 36 |
| 0.0002 | 0.0795 | 7.6265 | 0.5471 | 0.0771 | 6.8107 | 37 |
| 0.0001 | 0.0795 | 7.6685 | 0.5493 | 0.0771 | 6.6914 | 38 |
| 0.0001 | 0.0795 | 7.6100 | 0.5515 | 0.0771 | 6.7738 | 39 |
| 0.0000 | 0.0795 | 7.6623 | 0.5535 | 0.0771 | 6.7829 | 40 |
| 0.0000 | 0.0795 | 7.6768 | 0.5556 | 0.0771 | 6.8287 | 41 |
| 0.0000 | 0.0795 | 7.7199 | 0.5578 | 0.0772 | 6.8398 | 42 |
| 0.0000 | 0.0795 | 7.7423 | 0.5600 | 0.0772 | 6.8518 | 43 |
| 0.0000 | 0.0795 | 7.7561 | 0.5617 | 0.0772 | 6.8898 | 44 |
| 0.0000 | 0.0795 | 7.7766 | 0.5639 | 0.0772 | 6.8982 | 45 |
| 0.0000 | 0.0795 | 7.7962 | 0.5659 | 0.0772 | 6.9091 | 46 |
| 0.0000 | 0.0795 | 7.8106 | 0.5680 | 0.0772 | 6.9293 | 47 |
| 0.0000 | 0.0795 | 7.8387 | 0.5701 | 0.0772 | 6.9401 | 48 |
| 0.0000 | 0.0795 | 7.8480 | 0.5724 | 0.0772 | 6.9544 | 49 |
| 0.0000 | 0.0795 | 7.8755 | 0.5744 | 0.0772 | 6.9767 | 50 |
| 0.0000 | 0.0795 | 7.8924 | 0.5770 | 0.0772 | 6.9928 | 51 |
| 0.0000 | 0.0795 | 7.9169 | 0.5794 | 0.0772 | 7.0149 | 52 |
| 0.0000 | 0.0795 | 7.9400 | 0.5822 | 0.0772 | 7.0438 | 53 |
| 0.0000 | 0.0795 | 7.9697 | 0.5846 | 0.0772 | 7.0785 | 54 |
| 0.0000 | 0.0795 | 8.0061 | 0.5875 | 0.0772 | 7.0840 | 55 |
| 0.0000 | 0.0795 | 8.0364 | 0.5907 | 0.0772 | 7.0683 | 56 |
| 0.0113 | 0.0793 | 7.8674 | 0.5714 | 0.0768 | 6.0540 | 57 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
CyberHarem/hyuuga_hinata_naruto | CyberHarem | 2023-08-14T07:11:38Z | 0 | 0 | null | ["art", "text-to-image", "dataset:CyberHarem/hyuuga_hinata_naruto", "license:mit", "region:us"] | text-to-image | 2023-08-14T07:08:01Z |
---
license: mit
datasets:
- CyberHarem/hyuuga_hinata_naruto
pipeline_tag: text-to-image
tags:
- art
---
# Lora of hyuuga_hinata_naruto
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the `.pt` and `.safetensors` files for the chosen step, you need to use them together: the `.pt` file is used as an embedding, while the `.safetensors` file is loaded as a LoRA.
For example, to use the model from step 1500, download `1500/hyuuga_hinata_naruto.pt` as the embedding and `1500/hyuuga_hinata_naruto.safetensors` for the LoRA. By using both files together, you can generate images of the desired character.
**The trigger word is `hyuuga_hinata_naruto`.**
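As a loose illustration with `diffusers` (this repository documents an HCP-Diffusion / A1111-style workflow, so the pipeline class, base checkpoint, and loading calls below are assumptions rather than the project's official instructions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical base checkpoint; the card does not say which base model was used for training.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Use both step-1500 files together: the .pt file as a textual-inversion embedding,
# the .safetensors file as LoRA weights.
pipe.load_textual_inversion("1500/hyuuga_hinata_naruto.pt", token="hyuuga_hinata_naruto")
pipe.load_lora_weights("1500", weight_name="hyuuga_hinata_naruto.safetensors")

image = pipe("hyuuga_hinata_naruto, 1girl, portrait, best quality").images[0]
image.save("preview.png")
```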
These are available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------|
| 1500 |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/hyuuga_hinata_naruto.zip) |
| 1400 |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/hyuuga_hinata_naruto.zip) |
| 1300 |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/hyuuga_hinata_naruto.zip) |
| 1200 |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/hyuuga_hinata_naruto.zip) |
| 1100 |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/hyuuga_hinata_naruto.zip) |
| 1000 |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/hyuuga_hinata_naruto.zip) |
| 900 |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/hyuuga_hinata_naruto.zip) |
| 800 |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/hyuuga_hinata_naruto.zip) |
| 700 |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/hyuuga_hinata_naruto.zip) |
| 600 |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/hyuuga_hinata_naruto.zip) |
| 500 |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/hyuuga_hinata_naruto.zip) |
| 400 |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/hyuuga_hinata_naruto.zip) |
| 300 |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/hyuuga_hinata_naruto.zip) |
| 200 |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/hyuuga_hinata_naruto.zip) |
| 100 |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/hyuuga_hinata_naruto.zip) |
|
orhay1/RVC_Amamiya_Sora | orhay1 | 2023-08-14T07:10:30Z | 0 | 0 | null | ["license:openrail", "region:us"] | null | 2023-06-23T19:34:21Z |
---
license: openrail
---
RVC v2 model for the Japanese voice actress and singer Amamiya Sora.
|
bigmorning/whisper_charsplit_new_round3__0054 | bigmorning | 2023-08-14T07:10:05Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:bigmorning/whisper_charsplit_new_round2__0061", "base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-14T07:09:55Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0054
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0054
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 7.9400
- Validation Loss: 0.5822
- Validation Accuracy: 0.0772
- Validation Wermet: 7.0438
- Epoch: 53
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
| 0.0000 | 0.0795 | 8.2607 | 0.5689 | 0.0771 | 7.2107 | 30 |
| 0.0000 | 0.0795 | 8.2992 | 0.5700 | 0.0772 | 7.2006 | 31 |
| 0.0000 | 0.0795 | 8.3059 | 0.5721 | 0.0772 | 7.2341 | 32 |
| 0.0000 | 0.0795 | 8.2872 | 0.5744 | 0.0772 | 7.2069 | 33 |
| 0.0080 | 0.0794 | 8.3693 | 0.5947 | 0.0762 | 7.3034 | 34 |
| 0.0063 | 0.0794 | 8.2517 | 0.5491 | 0.0769 | 7.1324 | 35 |
| 0.0008 | 0.0795 | 7.9115 | 0.5447 | 0.0771 | 6.9422 | 36 |
| 0.0002 | 0.0795 | 7.6265 | 0.5471 | 0.0771 | 6.8107 | 37 |
| 0.0001 | 0.0795 | 7.6685 | 0.5493 | 0.0771 | 6.6914 | 38 |
| 0.0001 | 0.0795 | 7.6100 | 0.5515 | 0.0771 | 6.7738 | 39 |
| 0.0000 | 0.0795 | 7.6623 | 0.5535 | 0.0771 | 6.7829 | 40 |
| 0.0000 | 0.0795 | 7.6768 | 0.5556 | 0.0771 | 6.8287 | 41 |
| 0.0000 | 0.0795 | 7.7199 | 0.5578 | 0.0772 | 6.8398 | 42 |
| 0.0000 | 0.0795 | 7.7423 | 0.5600 | 0.0772 | 6.8518 | 43 |
| 0.0000 | 0.0795 | 7.7561 | 0.5617 | 0.0772 | 6.8898 | 44 |
| 0.0000 | 0.0795 | 7.7766 | 0.5639 | 0.0772 | 6.8982 | 45 |
| 0.0000 | 0.0795 | 7.7962 | 0.5659 | 0.0772 | 6.9091 | 46 |
| 0.0000 | 0.0795 | 7.8106 | 0.5680 | 0.0772 | 6.9293 | 47 |
| 0.0000 | 0.0795 | 7.8387 | 0.5701 | 0.0772 | 6.9401 | 48 |
| 0.0000 | 0.0795 | 7.8480 | 0.5724 | 0.0772 | 6.9544 | 49 |
| 0.0000 | 0.0795 | 7.8755 | 0.5744 | 0.0772 | 6.9767 | 50 |
| 0.0000 | 0.0795 | 7.8924 | 0.5770 | 0.0772 | 6.9928 | 51 |
| 0.0000 | 0.0795 | 7.9169 | 0.5794 | 0.0772 | 7.0149 | 52 |
| 0.0000 | 0.0795 | 7.9400 | 0.5822 | 0.0772 | 7.0438 | 53 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
morell23/arcanestyle | morell23 | 2023-08-14T06:58:13Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2023-08-14T06:58:13Z |
---
license: creativeml-openrail-m
---
|
bigmorning/whisper_charsplit_new_round3__0051 | bigmorning | 2023-08-14T06:57:31Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:bigmorning/whisper_charsplit_new_round2__0061", "base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-14T06:57:22Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0051
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0051
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 7.8755
- Validation Loss: 0.5744
- Validation Accuracy: 0.0772
- Validation Wermet: 6.9767
- Epoch: 50
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
| 0.0000 | 0.0795 | 8.2607 | 0.5689 | 0.0771 | 7.2107 | 30 |
| 0.0000 | 0.0795 | 8.2992 | 0.5700 | 0.0772 | 7.2006 | 31 |
| 0.0000 | 0.0795 | 8.3059 | 0.5721 | 0.0772 | 7.2341 | 32 |
| 0.0000 | 0.0795 | 8.2872 | 0.5744 | 0.0772 | 7.2069 | 33 |
| 0.0080 | 0.0794 | 8.3693 | 0.5947 | 0.0762 | 7.3034 | 34 |
| 0.0063 | 0.0794 | 8.2517 | 0.5491 | 0.0769 | 7.1324 | 35 |
| 0.0008 | 0.0795 | 7.9115 | 0.5447 | 0.0771 | 6.9422 | 36 |
| 0.0002 | 0.0795 | 7.6265 | 0.5471 | 0.0771 | 6.8107 | 37 |
| 0.0001 | 0.0795 | 7.6685 | 0.5493 | 0.0771 | 6.6914 | 38 |
| 0.0001 | 0.0795 | 7.6100 | 0.5515 | 0.0771 | 6.7738 | 39 |
| 0.0000 | 0.0795 | 7.6623 | 0.5535 | 0.0771 | 6.7829 | 40 |
| 0.0000 | 0.0795 | 7.6768 | 0.5556 | 0.0771 | 6.8287 | 41 |
| 0.0000 | 0.0795 | 7.7199 | 0.5578 | 0.0772 | 6.8398 | 42 |
| 0.0000 | 0.0795 | 7.7423 | 0.5600 | 0.0772 | 6.8518 | 43 |
| 0.0000 | 0.0795 | 7.7561 | 0.5617 | 0.0772 | 6.8898 | 44 |
| 0.0000 | 0.0795 | 7.7766 | 0.5639 | 0.0772 | 6.8982 | 45 |
| 0.0000 | 0.0795 | 7.7962 | 0.5659 | 0.0772 | 6.9091 | 46 |
| 0.0000 | 0.0795 | 7.8106 | 0.5680 | 0.0772 | 6.9293 | 47 |
| 0.0000 | 0.0795 | 7.8387 | 0.5701 | 0.0772 | 6.9401 | 48 |
| 0.0000 | 0.0795 | 7.8480 | 0.5724 | 0.0772 | 6.9544 | 49 |
| 0.0000 | 0.0795 | 7.8755 | 0.5744 | 0.0772 | 6.9767 | 50 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
yadhikari/yogesh-a-v2 | yadhikari | 2023-08-14T06:56:14Z | 2 | 0 | diffusers | ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-08-14T06:50:18Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### yogesh-a-v2 Dreambooth model trained by yadhikari with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
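A minimal loading sketch (assuming the repository loads as a standard `StableDiffusionPipeline`, as its tags suggest; the instance token in the prompt is a guess):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "yadhikari/yogesh-a-v2", torch_dtype=torch.float16
).to("cuda")

# "yogesh-a-v2" is assumed to be the Dreambooth instance token; adjust the prompt to taste.
image = pipe("photo of yogesh-a-v2 person, portrait, detailed").images[0]
image.save("sample.png")
```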
Sample pictures of this concept:
|
mainuzzaman/llama-2-7b-miniguanaco | mainuzzaman | 2023-08-14T06:54:33Z | 0 | 1 | peft | ["peft", "region:us"] | null | 2023-08-14T06:48:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
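A hedged loading sketch for inference with this adapter (the base model name is an assumption; this card only records the quantization config listed above):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the 4-bit NF4 config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model (not stated in this card)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")

# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, "mainuzzaman/llama-2-7b-miniguanaco")
```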
### Framework versions
- PEFT 0.4.0
|
bigmorning/whisper_charsplit_new_round3__0050 | bigmorning | 2023-08-14T06:53:17Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:bigmorning/whisper_charsplit_new_round2__0061", "base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-14T06:53:09Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0050
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0050
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 7.8480
- Validation Loss: 0.5724
- Validation Accuracy: 0.0772
- Validation Wermet: 6.9544
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
| 0.0000 | 0.0795 | 8.2607 | 0.5689 | 0.0771 | 7.2107 | 30 |
| 0.0000 | 0.0795 | 8.2992 | 0.5700 | 0.0772 | 7.2006 | 31 |
| 0.0000 | 0.0795 | 8.3059 | 0.5721 | 0.0772 | 7.2341 | 32 |
| 0.0000 | 0.0795 | 8.2872 | 0.5744 | 0.0772 | 7.2069 | 33 |
| 0.0080 | 0.0794 | 8.3693 | 0.5947 | 0.0762 | 7.3034 | 34 |
| 0.0063 | 0.0794 | 8.2517 | 0.5491 | 0.0769 | 7.1324 | 35 |
| 0.0008 | 0.0795 | 7.9115 | 0.5447 | 0.0771 | 6.9422 | 36 |
| 0.0002 | 0.0795 | 7.6265 | 0.5471 | 0.0771 | 6.8107 | 37 |
| 0.0001 | 0.0795 | 7.6685 | 0.5493 | 0.0771 | 6.6914 | 38 |
| 0.0001 | 0.0795 | 7.6100 | 0.5515 | 0.0771 | 6.7738 | 39 |
| 0.0000 | 0.0795 | 7.6623 | 0.5535 | 0.0771 | 6.7829 | 40 |
| 0.0000 | 0.0795 | 7.6768 | 0.5556 | 0.0771 | 6.8287 | 41 |
| 0.0000 | 0.0795 | 7.7199 | 0.5578 | 0.0772 | 6.8398 | 42 |
| 0.0000 | 0.0795 | 7.7423 | 0.5600 | 0.0772 | 6.8518 | 43 |
| 0.0000 | 0.0795 | 7.7561 | 0.5617 | 0.0772 | 6.8898 | 44 |
| 0.0000 | 0.0795 | 7.7766 | 0.5639 | 0.0772 | 6.8982 | 45 |
| 0.0000 | 0.0795 | 7.7962 | 0.5659 | 0.0772 | 6.9091 | 46 |
| 0.0000 | 0.0795 | 7.8106 | 0.5680 | 0.0772 | 6.9293 | 47 |
| 0.0000 | 0.0795 | 7.8387 | 0.5701 | 0.0772 | 6.9401 | 48 |
| 0.0000 | 0.0795 | 7.8480 | 0.5724 | 0.0772 | 6.9544 | 49 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
TheTravellingEngineer/llama2-7b-chat-hf-dpo | TheTravellingEngineer | 2023-08-14T06:50:53Z | 1,530 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-08-14T06:33:07Z |
The base model is Meta's Llama-2-7b-chat-hf. It was fine-tuned with DPO on the comparison_gpt4 dataset, and the model prompt is similar to that of the original Guanaco model.
This repo contains the merged fp16 model.
**Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.**
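Since the repo ships the merged fp16 weights, it can presumably be loaded like any other causal LM; a minimal sketch (the Guanaco-style prompt format is an assumption based on the description above):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheTravellingEngineer/llama2-7b-chat-hf-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "### Human: What is DPO in one sentence?### Assistant:"  # assumed Guanaco-style prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```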
---
- license: llama2
- datasets: comparison_gpt4
- language: en
- reference: https://github.com/hiyouga/LLaMA-Efficient-Tuning/tree/main
---
|
bigmorning/whisper_charsplit_new_round3__0048 | bigmorning | 2023-08-14T06:45:00Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:bigmorning/whisper_charsplit_new_round2__0061", "base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-14T06:44:51Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0048
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0048
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 7.8106
- Validation Loss: 0.5680
- Validation Accuracy: 0.0772
- Validation Wermet: 6.9293
- Epoch: 47
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
| 0.0000 | 0.0795 | 8.2607 | 0.5689 | 0.0771 | 7.2107 | 30 |
| 0.0000 | 0.0795 | 8.2992 | 0.5700 | 0.0772 | 7.2006 | 31 |
| 0.0000 | 0.0795 | 8.3059 | 0.5721 | 0.0772 | 7.2341 | 32 |
| 0.0000 | 0.0795 | 8.2872 | 0.5744 | 0.0772 | 7.2069 | 33 |
| 0.0080 | 0.0794 | 8.3693 | 0.5947 | 0.0762 | 7.3034 | 34 |
| 0.0063 | 0.0794 | 8.2517 | 0.5491 | 0.0769 | 7.1324 | 35 |
| 0.0008 | 0.0795 | 7.9115 | 0.5447 | 0.0771 | 6.9422 | 36 |
| 0.0002 | 0.0795 | 7.6265 | 0.5471 | 0.0771 | 6.8107 | 37 |
| 0.0001 | 0.0795 | 7.6685 | 0.5493 | 0.0771 | 6.6914 | 38 |
| 0.0001 | 0.0795 | 7.6100 | 0.5515 | 0.0771 | 6.7738 | 39 |
| 0.0000 | 0.0795 | 7.6623 | 0.5535 | 0.0771 | 6.7829 | 40 |
| 0.0000 | 0.0795 | 7.6768 | 0.5556 | 0.0771 | 6.8287 | 41 |
| 0.0000 | 0.0795 | 7.7199 | 0.5578 | 0.0772 | 6.8398 | 42 |
| 0.0000 | 0.0795 | 7.7423 | 0.5600 | 0.0772 | 6.8518 | 43 |
| 0.0000 | 0.0795 | 7.7561 | 0.5617 | 0.0772 | 6.8898 | 44 |
| 0.0000 | 0.0795 | 7.7766 | 0.5639 | 0.0772 | 6.8982 | 45 |
| 0.0000 | 0.0795 | 7.7962 | 0.5659 | 0.0772 | 6.9091 | 46 |
| 0.0000 | 0.0795 | 7.8106 | 0.5680 | 0.0772 | 6.9293 | 47 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
foduucom/product-detection-in-shelf-yolov8 | foduucom | 2023-08-14T06:44:19Z | 36 | 13 | ultralytics | ["ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "retail", "shelf-detection", "mart", "mall", "inventory-management", "en", "model-index", "region:us"] | object-detection | 2023-08-12T14:11:30Z |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
- retail
- shelf-detection
- mart
- mall
- inventory-management
library_name: ultralytics
library_version: 8.0.43
inference: false
model-index:
- name: foduucom/shelf-object-detection-yolov8
results:
- task:
type: object-detection
metrics:
- type: precision
value: 0.91
name: mAP@0.5(box)
language:
- en
pipeline_tag: object-detection
---
<div align="center">
<img width="640" alt="foduucom/product-detection-in-shelf-yolov8" src="https://huggingface.co/foduucom/product-detection-in-shelf-yolov8/resolve/main/thumbnail.jpg">
</div>
# Model Card for YOLOv8 Shelf Object Detection in Retail Environments
## Model Enthusiasm 🎉
Hey there, retail rockstar! 👋 If you're ready to make your mart or mall experience a whole lot cooler, give this YOLOv8 Shelf Object Detection model a virtual high-five! 🙌 Your shelves will never be the same again, and neither will your customers' smiles.
## Model Magic ✨
The YOLOv8 Shelf Object Detection model is your new retail sidekick! It doesn't just detect objects; it's got a sixth sense for finding what you need on those shelves. Whether it's a jar of pickles or the latest gadget, this model's got you covered. And hey, it's a pro at counting too! So, say goodbye to empty spaces and hello to perfectly organized retail enchantment.
## Supported Labels 🏬
```
['Empty Shelves', 'Magical Products']
```
## Collaboration Love ❤️
We're all about that collaboration groove! If you're as excited about this model as we are (and trust us, it's hard not to be), show some love with a thumbs up 👍. Let's work together to make retail dreams come true!
## Uses
### Direct Use
Integrate this model into your retail kingdom for real-time inventory harmony, shelf perfection, and automated restocking magic.
### Downstream Wonder
Want to optimize shelf layouts, unravel product placement mysteries, and sprinkle some sparkle into your customers' lives? This model's got your back!
### Not-So-Magic Disclaimers ⚡
Just like a trusty wizard, this model might have its quirky moments:
- It might not be in sync with tricky lighting and shelf chaos. Keep those shelves tidy!
- Rapid changes in product vibes and shelf dances could affect its accuracy and spellcasting.
### Human Touch & Wizard Wisdom 🧙
Remember, every spellcaster has their quirks. Test and twirl within your retail realm before letting it loose on the magical stage.
## How to Join the Magic
To dive into the retail wizardry with the YOLOv8 Shelf Object Detection model, follow these enchanted steps:
```bash
pip install ultralyticsplus==0.0.28 ultralytics==8.0.43
```
- Summon the model and unveil its secrets:
```python
# Wave your wand (or keyboard) to get started!
from ultralyticsplus import YOLO, render_result
import cv2
# Cast a spell to summon the model
model = YOLO('foduucom/shelf-object-detection-yolov8')
# Tweak the magical parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# Set the input source: a video file, an image path, or a camera index (e.g. 0)
# can all be passed to cv2.VideoCapture for streaming inference.
source = "path/to/your/shelf/video"
cap = cv2.VideoCapture(source)

# Begin the mystical journey through video frames
# (Remember to have your retail tapestry ready)
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()
    if success:
        # Unleash the magic of YOLOv8
        results = model(frame)
        # Showcase the magic on the frame
        annotated_frame = results[0].plot()
        # Present the enchanted frame
        cv2.imshow("YOLOv8 Retail Wizardry", annotated_frame)
        # Dispel the spell if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Return to reality when the video ends
        break

# Release your captive video and close the portal
cap.release()
cv2.destroyAllWindows()
```
## Model Masters 🧙‍♂️
The mystical YOLOv8 Shelf Object Detection model was crafted by wizards at FODUU AI.
```bibtex
@ModelCard{
author = {Nehul Agrawal and
Pranjal Singh Thakur},
title = {YOLOv8 Shelf Object Detection in Retail Environments},
year = {2023}
}
```
Join the retail magic and send your owl to info@foduu.com for any questions or enchanting contributions.
Enjoy your retail adventures! 🛒✨
|
bjfxs/llama2-7b-200steps-finetunined-sxl-1 | bjfxs | 2023-08-14T06:41:24Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-14T06:41:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_round3__0047 | bigmorning | 2023-08-14T06:40:45Z | 61 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:bigmorning/whisper_charsplit_new_round2__0061", "base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-14T06:40:37Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0047
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0047
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 7.7962
- Validation Loss: 0.5659
- Validation Accuracy: 0.0772
- Validation Wermet: 6.9091
- Epoch: 46
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
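As a rough sketch, the optimizer dict above corresponds to the TensorFlow `AdamWeightDecay` class shipped with `transformers` (an illustrative reconstruction, not the original training script):
```python
from transformers import AdamWeightDecay

# Reconstructs the optimizer configuration listed above.
optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    weight_decay_rate=0.01,
)
```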
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
| 0.0000 | 0.0795 | 8.2607 | 0.5689 | 0.0771 | 7.2107 | 30 |
| 0.0000 | 0.0795 | 8.2992 | 0.5700 | 0.0772 | 7.2006 | 31 |
| 0.0000 | 0.0795 | 8.3059 | 0.5721 | 0.0772 | 7.2341 | 32 |
| 0.0000 | 0.0795 | 8.2872 | 0.5744 | 0.0772 | 7.2069 | 33 |
| 0.0080 | 0.0794 | 8.3693 | 0.5947 | 0.0762 | 7.3034 | 34 |
| 0.0063 | 0.0794 | 8.2517 | 0.5491 | 0.0769 | 7.1324 | 35 |
| 0.0008 | 0.0795 | 7.9115 | 0.5447 | 0.0771 | 6.9422 | 36 |
| 0.0002 | 0.0795 | 7.6265 | 0.5471 | 0.0771 | 6.8107 | 37 |
| 0.0001 | 0.0795 | 7.6685 | 0.5493 | 0.0771 | 6.6914 | 38 |
| 0.0001 | 0.0795 | 7.6100 | 0.5515 | 0.0771 | 6.7738 | 39 |
| 0.0000 | 0.0795 | 7.6623 | 0.5535 | 0.0771 | 6.7829 | 40 |
| 0.0000 | 0.0795 | 7.6768 | 0.5556 | 0.0771 | 6.8287 | 41 |
| 0.0000 | 0.0795 | 7.7199 | 0.5578 | 0.0772 | 6.8398 | 42 |
| 0.0000 | 0.0795 | 7.7423 | 0.5600 | 0.0772 | 6.8518 | 43 |
| 0.0000 | 0.0795 | 7.7561 | 0.5617 | 0.0772 | 6.8898 | 44 |
| 0.0000 | 0.0795 | 7.7766 | 0.5639 | 0.0772 | 6.8982 | 45 |
| 0.0000 | 0.0795 | 7.7962 | 0.5659 | 0.0772 | 6.9091 | 46 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
msthil2/distilhubert-finetuned-gtzan
|
msthil2
| 2023-08-14T06:29:19Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-13T20:35:13Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5620
- Accuracy: 0.84
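A minimal inference sketch using the standard `transformers` audio-classification pipeline (the audio path is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="msthil2/distilhubert-finetuned-gtzan",
)

# Predict the genre of a music clip (path is illustrative).
predictions = classifier("path/to/song.wav")
print(predictions)
```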
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9979 | 1.0 | 113 | 1.8250 | 0.39 |
| 1.3648 | 2.0 | 226 | 1.3015 | 0.58 |
| 1.0783 | 3.0 | 339 | 0.9586 | 0.78 |
| 0.8267 | 4.0 | 452 | 0.8479 | 0.74 |
| 0.7503 | 5.0 | 565 | 0.7404 | 0.76 |
| 0.404 | 6.0 | 678 | 0.6402 | 0.81 |
| 0.4935 | 7.0 | 791 | 0.5936 | 0.81 |
| 0.2201 | 8.0 | 904 | 0.5934 | 0.82 |
| 0.2689 | 9.0 | 1017 | 0.5614 | 0.81 |
| 0.1843 | 10.0 | 1130 | 0.5620 | 0.84 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0044
|
bigmorning
| 2023-08-14T06:28:19Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T06:28:11Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0044
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0044
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 7.7423
- Validation Loss: 0.5600
- Validation Accuracy: 0.0772
- Validation Wermet: 6.8518
- Epoch: 43
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
| 0.0000 | 0.0795 | 8.2607 | 0.5689 | 0.0771 | 7.2107 | 30 |
| 0.0000 | 0.0795 | 8.2992 | 0.5700 | 0.0772 | 7.2006 | 31 |
| 0.0000 | 0.0795 | 8.3059 | 0.5721 | 0.0772 | 7.2341 | 32 |
| 0.0000 | 0.0795 | 8.2872 | 0.5744 | 0.0772 | 7.2069 | 33 |
| 0.0080 | 0.0794 | 8.3693 | 0.5947 | 0.0762 | 7.3034 | 34 |
| 0.0063 | 0.0794 | 8.2517 | 0.5491 | 0.0769 | 7.1324 | 35 |
| 0.0008 | 0.0795 | 7.9115 | 0.5447 | 0.0771 | 6.9422 | 36 |
| 0.0002 | 0.0795 | 7.6265 | 0.5471 | 0.0771 | 6.8107 | 37 |
| 0.0001 | 0.0795 | 7.6685 | 0.5493 | 0.0771 | 6.6914 | 38 |
| 0.0001 | 0.0795 | 7.6100 | 0.5515 | 0.0771 | 6.7738 | 39 |
| 0.0000 | 0.0795 | 7.6623 | 0.5535 | 0.0771 | 6.7829 | 40 |
| 0.0000 | 0.0795 | 7.6768 | 0.5556 | 0.0771 | 6.8287 | 41 |
| 0.0000 | 0.0795 | 7.7199 | 0.5578 | 0.0772 | 6.8398 | 42 |
| 0.0000 | 0.0795 | 7.7423 | 0.5600 | 0.0772 | 6.8518 | 43 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
samaksh-khatri-crest-data/gmra_model_gpt2_14082023T112228
|
samaksh-khatri-crest-data
| 2023-08-14T06:27:56Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-14T05:52:28Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gmra_model_gpt2_14082023T112228
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gmra_model_gpt2_14082023T112228
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3424
- Accuracy: 0.9016
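A minimal inference sketch using the standard `transformers` text-classification pipeline (the label set comes from the unspecified fine-tuning dataset, so the example input is purely illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="samaksh-khatri-crest-data/gmra_model_gpt2_14082023T112228",
)

print(classifier("Could you please share the updated report by tomorrow?"))
```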
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 71 | 0.7440 | 0.7636 |
| No log | 1.99 | 142 | 0.5466 | 0.8278 |
| No log | 2.99 | 213 | 0.4379 | 0.8656 |
| No log | 4.0 | 285 | 0.3959 | 0.8787 |
| No log | 5.0 | 356 | 0.3560 | 0.8919 |
| No log | 5.99 | 427 | 0.3442 | 0.8946 |
| No log | 6.99 | 498 | 0.3535 | 0.8954 |
| 0.5012 | 8.0 | 570 | 0.3232 | 0.9007 |
| 0.5012 | 9.0 | 641 | 0.3364 | 0.8989 |
| 0.5012 | 9.96 | 710 | 0.3424 | 0.9016 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
sara98/bert-finetuned-mrpc-trainerclass
|
sara98
| 2023-08-14T06:24:18Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-14T06:08:34Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: bert-finetuned-mrpc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
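A minimal sketch for scoring a sentence pair with this checkpoint (MRPC is a paraphrase task; the label mapping is an assumption since the card does not list it):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sara98/bert-finetuned-mrpc-trainerclass"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Quarterly profits reached an all-time high for the company.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# For GLUE/MRPC, index 1 conventionally means "equivalent" (assumption).
print(logits.softmax(-1))
```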
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
morell23/crrysxmrky
|
morell23
| 2023-08-14T06:20:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-14T06:15:44Z |
---
license: creativeml-openrail-m
---
|
ColDan/ppo-LunarLander-v2
|
ColDan
| 2023-08-14T06:17:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-14T06:17:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.29 +/- 23.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
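Until the author fills in the TODO above, here is a minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `huggingface_sb3` naming convention):
```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained agent from the Hub (filename is assumed).
checkpoint = load_from_hub(
    repo_id="ColDan/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one evaluation episode.
# Newer gymnasium releases may require the "LunarLander-v3" id instead.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```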
|
bigmorning/whisper_charsplit_new_round3__0041
|
bigmorning
| 2023-08-14T06:15:42Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T06:15:35Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0041
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0041
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 7.6623
- Validation Loss: 0.5535
- Validation Accuracy: 0.0771
- Validation Wermet: 6.7829
- Epoch: 40
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
| 0.0000 | 0.0795 | 8.2607 | 0.5689 | 0.0771 | 7.2107 | 30 |
| 0.0000 | 0.0795 | 8.2992 | 0.5700 | 0.0772 | 7.2006 | 31 |
| 0.0000 | 0.0795 | 8.3059 | 0.5721 | 0.0772 | 7.2341 | 32 |
| 0.0000 | 0.0795 | 8.2872 | 0.5744 | 0.0772 | 7.2069 | 33 |
| 0.0080 | 0.0794 | 8.3693 | 0.5947 | 0.0762 | 7.3034 | 34 |
| 0.0063 | 0.0794 | 8.2517 | 0.5491 | 0.0769 | 7.1324 | 35 |
| 0.0008 | 0.0795 | 7.9115 | 0.5447 | 0.0771 | 6.9422 | 36 |
| 0.0002 | 0.0795 | 7.6265 | 0.5471 | 0.0771 | 6.8107 | 37 |
| 0.0001 | 0.0795 | 7.6685 | 0.5493 | 0.0771 | 6.6914 | 38 |
| 0.0001 | 0.0795 | 7.6100 | 0.5515 | 0.0771 | 6.7738 | 39 |
| 0.0000 | 0.0795 | 7.6623 | 0.5535 | 0.0771 | 6.7829 | 40 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
anikesh-mane/prefix-tuned-flan-t5-large
|
anikesh-mane
| 2023-08-14T06:13:10Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-14T06:13:09Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
CyberHarem/haruno_sakura_naruto
|
CyberHarem
| 2023-08-14T06:10:12Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/haruno_sakura_naruto",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-14T06:06:00Z |
---
license: mit
datasets:
- CyberHarem/haruno_sakura_naruto
pipeline_tag: text-to-image
tags:
- art
---
# Lora of haruno_sakura_naruto
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/haruno_sakura_naruto.pt` as the embedding and `1500/haruno_sakura_naruto.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `haruno_sakura_naruto`.**
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/haruno_sakura_naruto.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/haruno_sakura_naruto.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/haruno_sakura_naruto.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/haruno_sakura_naruto.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/haruno_sakura_naruto.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/haruno_sakura_naruto.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/haruno_sakura_naruto.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/haruno_sakura_naruto.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/haruno_sakura_naruto.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/haruno_sakura_naruto.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/haruno_sakura_naruto.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/haruno_sakura_naruto.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/haruno_sakura_naruto.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/haruno_sakura_naruto.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/haruno_sakura_naruto.zip) |
|
bigmorning/whisper_charsplit_new_round3__0034
|
bigmorning
| 2023-08-14T05:46:38Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T05:46:30Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0034
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0034
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 8.2872
- Validation Loss: 0.5744
- Validation Accuracy: 0.0772
- Validation Wermet: 7.2069
- Epoch: 33
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
| 0.0000 | 0.0795 | 8.2607 | 0.5689 | 0.0771 | 7.2107 | 30 |
| 0.0000 | 0.0795 | 8.2992 | 0.5700 | 0.0772 | 7.2006 | 31 |
| 0.0000 | 0.0795 | 8.3059 | 0.5721 | 0.0772 | 7.2341 | 32 |
| 0.0000 | 0.0795 | 8.2872 | 0.5744 | 0.0772 | 7.2069 | 33 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0033
|
bigmorning
| 2023-08-14T05:42:28Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T05:42:19Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0033
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0033
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 8.3059
- Validation Loss: 0.5721
- Validation Accuracy: 0.0772
- Validation Wermet: 7.2341
- Epoch: 32
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
| 0.0000 | 0.0795 | 8.2607 | 0.5689 | 0.0771 | 7.2107 | 30 |
| 0.0000 | 0.0795 | 8.2992 | 0.5700 | 0.0772 | 7.2006 | 31 |
| 0.0000 | 0.0795 | 8.3059 | 0.5721 | 0.0772 | 7.2341 | 32 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0031
|
bigmorning
| 2023-08-14T05:34:02Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T05:33:54Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0031
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0031
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 8.2607
- Validation Loss: 0.5689
- Validation Accuracy: 0.0771
- Validation Wermet: 7.2107
- Epoch: 30
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
| 0.0000 | 0.0795 | 8.2607 | 0.5689 | 0.0771 | 7.2107 | 30 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0030
|
bigmorning
| 2023-08-14T05:29:51Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T05:29:43Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0030
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0030
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 8.2571
- Validation Loss: 0.5667
- Validation Accuracy: 0.0771
- Validation Wermet: 7.1787
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
| 0.0000 | 0.0795 | 8.2332 | 0.5633 | 0.0771 | 7.1736 | 27 |
| 0.0000 | 0.0795 | 8.2573 | 0.5648 | 0.0771 | 7.2086 | 28 |
| 0.0000 | 0.0795 | 8.2571 | 0.5667 | 0.0771 | 7.1787 | 29 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Yntec/QToriReloaded
|
Yntec
| 2023-08-14T05:20:14Z | 640 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"agntperseus",
"TkskKurumi",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-28T22:27:49Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- agntperseus
- TkskKurumi
---
# QTori Reloaded
The QTori LoRA merged with RMHF 2.5D-V2.
Original pages:
https://civitai.com/models/15179/qtori-style-lora
https://civitai.com/models/101518?modelVersionId=110456
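A minimal text-to-image sketch with `diffusers` (the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/QToriReloaded", torch_dtype=torch.float16
).to("cuda")

image = pipe("a cozy bookshop interior, warm lighting, detailed illustration").images[0]
image.save("qtori_sample.png")
```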
|
CyberHarem/uzumaki_kushina_naruto
|
CyberHarem
| 2023-08-14T05:17:45Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/uzumaki_kushina_naruto",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-14T05:13:40Z |
---
license: mit
datasets:
- CyberHarem/uzumaki_kushina_naruto
pipeline_tag: text-to-image
tags:
- art
---
# Lora of uzumaki_kushina_naruto
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/uzumaki_kushina_naruto.pt` as the embedding and `1500/uzumaki_kushina_naruto.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `uzumaki_kushina_naruto`.**
These are available steps:
| Steps | pattern_1 | pattern_2 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:----------------------------------------------------|:-------------------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------------------|
| 1500 |  | [<NSFW, click to see>](1500/previews/pattern_2.png) | [<NSFW, click to see>](1500/previews/bikini.png) |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/uzumaki_kushina_naruto.zip) |
| 1400 |  | [<NSFW, click to see>](1400/previews/pattern_2.png) | [<NSFW, click to see>](1400/previews/bikini.png) |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/uzumaki_kushina_naruto.zip) |
| 1300 |  | [<NSFW, click to see>](1300/previews/pattern_2.png) | [<NSFW, click to see>](1300/previews/bikini.png) |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/uzumaki_kushina_naruto.zip) |
| 1200 |  | [<NSFW, click to see>](1200/previews/pattern_2.png) | [<NSFW, click to see>](1200/previews/bikini.png) |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/uzumaki_kushina_naruto.zip) |
| 1100 |  | [<NSFW, click to see>](1100/previews/pattern_2.png) | [<NSFW, click to see>](1100/previews/bikini.png) |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/uzumaki_kushina_naruto.zip) |
| 1000 |  | [<NSFW, click to see>](1000/previews/pattern_2.png) | [<NSFW, click to see>](1000/previews/bikini.png) |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/uzumaki_kushina_naruto.zip) |
| 900 |  | [<NSFW, click to see>](900/previews/pattern_2.png) | [<NSFW, click to see>](900/previews/bikini.png) |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/uzumaki_kushina_naruto.zip) |
| 800 |  | [<NSFW, click to see>](800/previews/pattern_2.png) | [<NSFW, click to see>](800/previews/bikini.png) |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/uzumaki_kushina_naruto.zip) |
| 700 |  | [<NSFW, click to see>](700/previews/pattern_2.png) | [<NSFW, click to see>](700/previews/bikini.png) |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/uzumaki_kushina_naruto.zip) |
| 600 |  | [<NSFW, click to see>](600/previews/pattern_2.png) | [<NSFW, click to see>](600/previews/bikini.png) |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/uzumaki_kushina_naruto.zip) |
| 500 |  | [<NSFW, click to see>](500/previews/pattern_2.png) | [<NSFW, click to see>](500/previews/bikini.png) |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/uzumaki_kushina_naruto.zip) |
| 400 |  | [<NSFW, click to see>](400/previews/pattern_2.png) | [<NSFW, click to see>](400/previews/bikini.png) |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/uzumaki_kushina_naruto.zip) |
| 300 |  | [<NSFW, click to see>](300/previews/pattern_2.png) | [<NSFW, click to see>](300/previews/bikini.png) |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/uzumaki_kushina_naruto.zip) |
| 200 |  | [<NSFW, click to see>](200/previews/pattern_2.png) | [<NSFW, click to see>](200/previews/bikini.png) |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/uzumaki_kushina_naruto.zip) |
| 100 |  | [<NSFW, click to see>](100/previews/pattern_2.png) | [<NSFW, click to see>](100/previews/bikini.png) |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/uzumaki_kushina_naruto.zip) |
|
bigmorning/whisper_charsplit_new_round3__0027
|
bigmorning
| 2023-08-14T05:17:24Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T05:17:16Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0027
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0027
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 8.2151
- Validation Loss: 0.5614
- Validation Accuracy: 0.0771
- Validation Wermet: 7.1972
- Epoch: 26
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
| 0.0000 | 0.0795 | 8.2151 | 0.5614 | 0.0771 | 7.1972 | 26 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0026
|
bigmorning
| 2023-08-14T05:13:16Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T05:13:08Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0026
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0026
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0001
- Train Accuracy: 0.0795
- Train Wermet: 8.1494
- Validation Loss: 0.5589
- Validation Accuracy: 0.0771
- Validation Wermet: 7.1609
- Epoch: 25
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
| 0.0001 | 0.0795 | 8.1494 | 0.5589 | 0.0771 | 7.1609 | 25 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0025
|
bigmorning
| 2023-08-14T05:09:05Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T05:08:57Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0025
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0025
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0002
- Train Accuracy: 0.0795
- Train Wermet: 8.1738
- Validation Loss: 0.5604
- Validation Accuracy: 0.0771
- Validation Wermet: 7.1617
- Epoch: 24
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
| 0.0004 | 0.0795 | 8.0507 | 0.5619 | 0.0770 | 7.0678 | 22 |
| 0.0003 | 0.0795 | 8.0534 | 0.5593 | 0.0771 | 7.0433 | 23 |
| 0.0002 | 0.0795 | 8.1738 | 0.5604 | 0.0771 | 7.1617 | 24 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
chriskim2273/IOTNation_QA_Model_2.01_DistilBert_NO_UNK_DATASET_FOR_COMPARISON
|
chriskim2273
| 2023-08-14T05:05:13Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-14T04:34:38Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: IOTNation_QA_Model_2.01_DistilBert_NO_UNK_DATASET_FOR_COMPARISON
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_QA_Model_2.01_DistilBert_NO_UNK_DATASET_FOR_COMPARISON
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9921
## Model description
More information needed
## Intended uses & limitations
More information needed
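As a rough usage sketch (not provided by the model author; the question and context below are hypothetical examples):
```python
from transformers import pipeline

# Sketch only: query the fine-tuned DistilBERT QA checkpoint via the pipeline API.
qa = pipeline(
    "question-answering",
    model="chriskim2273/IOTNation_QA_Model_2.01_DistilBert_NO_UNK_DATASET_FOR_COMPARISON",
)
result = qa(
    question="Which company raised the funding round?",  # hypothetical question
    context="Acme IoT announced a $10M Series A led by Example Ventures.",  # hypothetical context
)
print(result["answer"], result["score"])
```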
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0022
|
bigmorning
| 2023-08-14T04:56:26Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T04:56:08Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0022
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0022
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0013
- Train Accuracy: 0.0795
- Train Wermet: 8.2537
- Validation Loss: 0.5574
- Validation Accuracy: 0.0770
- Validation Wermet: 6.7708
- Epoch: 21
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
| 0.0003 | 0.0795 | 7.6709 | 0.6610 | 0.0762 | 7.0119 | 19 |
| 0.0115 | 0.0793 | 8.3288 | 0.5580 | 0.0769 | 7.1457 | 20 |
| 0.0013 | 0.0795 | 8.2537 | 0.5574 | 0.0770 | 6.7708 | 21 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
potatomine/keras-dummy-sequential-demo-test
|
potatomine
| 2023-08-14T04:52:51Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-08-14T04:46:38Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
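For illustration, a minimal sketch of how an optimizer with the values above could be rebuilt in `tf.keras`; the placeholder model is an assumption, since the card does not describe the architecture:
```python
import tensorflow as tf

# Sketch only: reconstruct the Adam optimizer from the hyperparameter table above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.0010000000474974513,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])  # placeholder architecture
model.compile(optimizer=optimizer, loss="mse")
```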
|
muhammadravi251001/fine-tuned-KoreanNLI-KorNLI-with-xlm-roberta-large
|
muhammadravi251001
| 2023-08-14T04:47:32Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-11T06:40:57Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-KoreanNLI-KorNLI-with-xlm-roberta-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-KoreanNLI-KorNLI-with-xlm-roberta-large
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4428
- Accuracy: 0.8439
- F1: 0.8445
## Model description
More information needed
## Intended uses & limitations
More information needed
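As a rough usage sketch (not from the model author; the sentence pair is a made-up example, and the label-to-id mapping is not documented in this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch only: score an NLI premise/hypothesis pair with the fine-tuned checkpoint.
name = "muhammadravi251001/fine-tuned-KoreanNLI-KorNLI-with-xlm-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "저는 오늘 아침에 커피를 마셨습니다."        # "I drank coffee this morning." (hypothetical)
hypothesis = "저는 오늘 아무것도 마시지 않았습니다."   # "I drank nothing today." (hypothetical)
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class indices follow the checkpoint's config, which this card does not spell out
```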
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4595 | 0.5 | 3654 | 0.4630 | 0.8064 | 0.8089 |
| 0.4138 | 1.0 | 7308 | 0.4497 | 0.8146 | 0.8165 |
| 0.3748 | 1.5 | 10962 | 0.4280 | 0.8420 | 0.8422 |
| 0.3687 | 2.0 | 14616 | 0.4161 | 0.8363 | 0.8376 |
| 0.3265 | 2.5 | 18270 | 0.4209 | 0.8459 | 0.8465 |
| 0.3392 | 3.0 | 21924 | 0.4107 | 0.8459 | 0.8453 |
| 0.2928 | 3.5 | 25578 | 0.4479 | 0.8395 | 0.8401 |
| 0.2975 | 4.0 | 29232 | 0.4428 | 0.8439 | 0.8445 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.13.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
chriskim2273/IOTNation_Classification_Model_0.75_5K_AND_ORIGINAL_DATASET_BERT
|
chriskim2273
| 2023-08-14T04:44:35Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-13T08:25:38Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IOTNation_Classification_Model_0.75_5K_AND_ORIGINAL_DATASET_BERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_Classification_Model_0.75_5K_AND_ORIGINAL_DATASET_BERT
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0178
- Accuracy: 0.9958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0019
|
bigmorning
| 2023-08-14T04:43:40Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T04:43:33Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0019
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0019
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 7.7427
- Validation Loss: 0.5796
- Validation Accuracy: 0.0771
- Validation Wermet: 6.8406
- Epoch: 18
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
| 0.0000 | 0.0795 | 7.7427 | 0.5796 | 0.0771 | 6.8406 | 18 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0018
|
bigmorning
| 2023-08-14T04:39:33Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T04:39:25Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0018
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0018
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 7.7109
- Validation Loss: 0.5784
- Validation Accuracy: 0.0771
- Validation Wermet: 6.8560
- Epoch: 17
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
| 0.0000 | 0.0795 | 7.7109 | 0.5784 | 0.0771 | 6.8560 | 17 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0017
|
bigmorning
| 2023-08-14T04:35:18Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T04:35:08Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0017
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0017
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 7.7355
- Validation Loss: 0.5765
- Validation Accuracy: 0.0771
- Validation Wermet: 6.8447
- Epoch: 16
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
| 0.0000 | 0.0795 | 7.7355 | 0.5765 | 0.0771 | 6.8447 | 16 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
chriskim2273/IOTNation_QA_Model_2.0_DistilBert_UNK_DATASET_50_ENTRIES
|
chriskim2273
| 2023-08-14T04:32:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-14T04:21:05Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: IOTNation_QA_Model_2.0_DistilBert_UNK_DATASET_50_ENTRIES
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_QA_Model_2.0_DistilBert_UNK_DATASET_50_ENTRIES
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0016
|
bigmorning
| 2023-08-14T04:30:56Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T04:30:48Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0016
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0016
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0795
- Train Wermet: 7.7277
- Validation Loss: 0.5752
- Validation Accuracy: 0.0771
- Validation Wermet: 6.8671
- Epoch: 15
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
| 0.0001 | 0.0795 | 7.7721 | 0.5726 | 0.0771 | 6.8911 | 12 |
| 0.0000 | 0.0795 | 7.8163 | 0.5721 | 0.0771 | 6.8876 | 13 |
| 0.0000 | 0.0795 | 7.7745 | 0.5741 | 0.0771 | 6.8770 | 14 |
| 0.0000 | 0.0795 | 7.7277 | 0.5752 | 0.0771 | 6.8671 | 15 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
gregorgabrovsek/SloBertAA_Top20_WithOOC_082023
|
gregorgabrovsek
| 2023-08-14T04:23:10Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-13T17:09:23Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SloBertAA_Top20_WithOOC_082023
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SloBertAA_Top20_WithOOC_082023
This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0247
- Accuracy: 0.8659
- F1: 0.8642
- Precision: 0.8642
- Recall: 0.8659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5972 | 1.0 | 23853 | 0.5451 | 0.8293 | 0.8264 | 0.8276 | 0.8293 |
| 0.4728 | 2.0 | 47706 | 0.5189 | 0.8435 | 0.8380 | 0.8458 | 0.8435 |
| 0.3736 | 3.0 | 71559 | 0.5216 | 0.8512 | 0.8499 | 0.8507 | 0.8512 |
| 0.2785 | 4.0 | 95412 | 0.6074 | 0.8526 | 0.8500 | 0.8528 | 0.8526 |
| 0.2002 | 5.0 | 119265 | 0.6906 | 0.8561 | 0.8534 | 0.8552 | 0.8561 |
| 0.1719 | 6.0 | 143118 | 0.7822 | 0.8600 | 0.8580 | 0.8588 | 0.8600 |
| 0.1337 | 7.0 | 166971 | 0.8742 | 0.8623 | 0.8607 | 0.8612 | 0.8623 |
| 0.0826 | 8.0 | 190824 | 0.9613 | 0.8627 | 0.8602 | 0.8605 | 0.8627 |
| 0.0603 | 9.0 | 214677 | 1.0092 | 0.8632 | 0.8617 | 0.8620 | 0.8632 |
| 0.0359 | 10.0 | 238530 | 1.0247 | 0.8659 | 0.8642 | 0.8642 | 0.8659 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
bigmorning/whisper_charsplit_new_round3__0012
|
bigmorning
| 2023-08-14T04:14:15Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T04:14:07Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0012
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0012
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0001
- Train Accuracy: 0.0795
- Train Wermet: 7.7540
- Validation Loss: 0.5725
- Validation Accuracy: 0.0771
- Validation Wermet: 6.9281
- Epoch: 11
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
| 0.0001 | 0.0795 | 7.7157 | 0.5681 | 0.0771 | 6.8391 | 10 |
| 0.0001 | 0.0795 | 7.7540 | 0.5725 | 0.0771 | 6.9281 | 11 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
byc3230/ko_en_translation_9
|
byc3230
| 2023-08-14T04:08:41Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-14T00:59:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: ko_en_translation_9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ko_en_translation_9
This model is a fine-tuned version of [inhee/opus-mt-ko-en-finetuned-ko-to-en5](https://huggingface.co/inhee/opus-mt-ko-en-finetuned-ko-to-en5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8852
- Bleu: 45.395
- Gen Len: 38.8647
## Model description
More information needed
## Intended uses & limitations
More information needed
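As a rough usage sketch (not from the model author; the input sentence is a made-up example):
```python
from transformers import pipeline

# Sketch only: Korean-to-English translation with this fine-tuned Marian checkpoint.
translator = pipeline("translation", model="byc3230/ko_en_translation_9")
print(translator("오늘 날씨가 정말 좋네요.")[0]["translation_text"])
```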
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.2529 | 1.0 | 4900 | 1.0958 | 40.5188 | 38.6684 |
| 1.0565 | 2.0 | 9800 | 0.9885 | 42.7968 | 38.6141 |
| 0.9518 | 3.0 | 14700 | 0.9370 | 43.8495 | 38.7762 |
| 0.8798 | 4.0 | 19600 | 0.9119 | 44.7342 | 38.7903 |
| 0.8401 | 5.0 | 24500 | 0.8958 | 45.2518 | 38.8909 |
| 0.8075 | 6.0 | 29400 | 0.8883 | 45.326 | 38.8503 |
| 0.7934 | 7.0 | 34300 | 0.8852 | 45.395 | 38.8647 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0010
|
bigmorning
| 2023-08-14T04:05:45Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T04:05:37Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0010
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0004
- Train Accuracy: 0.0795
- Train Wermet: 7.3807
- Validation Loss: 0.5698
- Validation Accuracy: 0.0770
- Validation Wermet: 7.0671
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
| 0.0004 | 0.0795 | 7.3807 | 0.5698 | 0.0770 | 7.0671 | 9 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0009
|
bigmorning
| 2023-08-14T04:01:32Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T04:01:25Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0009
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0009
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0010
- Train Accuracy: 0.0795
- Train Wermet: 7.5822
- Validation Loss: 0.5755
- Validation Accuracy: 0.0769
- Validation Wermet: 6.6613
- Epoch: 8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
| 0.0009 | 0.0795 | 7.2393 | 0.5816 | 0.0769 | 6.5734 | 7 |
| 0.0010 | 0.0795 | 7.5822 | 0.5755 | 0.0769 | 6.6613 | 8 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
lmdeploy/llama2-chat-13b-w4
|
lmdeploy
| 2023-08-14T04:00:23Z | 15 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-10T14:32:44Z |
---
license: llama2
pipeline_tag: text-generation
tags:
- text-generation-inference
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64ccdc322e592905f922a06e/VhwQtaklohkUXFWkjA-3M.png" width="450"/>
English | [简体中文](README_zh-CN.md)
</div>
<p align="center">
👋 join us on <a href="https://twitter.com/intern_lm" target="_blank">Twitter</a>, <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=internwx" target="_blank">WeChat</a>
</p>
# W4A16 LLM Model Deployment
LMDeploy supports 4-bit weight-only LLM inference; the minimum requirement for NVIDIA GPUs is compute capability sm80.
Before proceeding with inference, please ensure that lmdeploy (>= v0.0.4) is installed.
```shell
pip install lmdeploy
```
## 4-bit LLM model Inference
You can download the pre-quantized 4-bit weight models from LMDeploy's [model zoo](https://huggingface.co/lmdeploy) and conduct inference using the following command.
Alternatively, you can quantize 16-bit weights to 4-bit weights following the ["4-bit Weight Quantization"](#4-bit-weight-quantization) section, and then perform inference as per the below instructions.
Take the 4-bit Llama-2-13B model from the model zoo as an example:
```shell
git-lfs install
git clone https://huggingface.co/lmdeploy/llama2-chat-13b-w4
```
As demonstrated in the commands below, first convert the model's layout with `turbomind.deploy`; then you can chat with the AI assistant in the terminal.
```shell
## Convert the model's layout and store it in the default path, ./workspace.
python3 -m lmdeploy.serve.turbomind.deploy \
--model-name llama2 \
--model-path ./llama2-chat-13b-w4 \
--model-format awq \
--group-size 128
## inference
python3 -m lmdeploy.turbomind.chat ./workspace
```
## Serve with gradio
If you wish to interact with the model via web ui, please initiate the gradio server as indicated below:
```shell
python3 -m lmdeploy.serve.turbomind ./workspace --server_name {ip_addr} --server_port {port}
```
Subsequently, you can open the website `http://{ip_addr}:{port}` in your browser and interact with the model.
## Inference Performance
We benchmarked Llama 2 7B and 13B with 4-bit quantization on an NVIDIA GeForce RTX 4090 using [profile_generation.py](https://github.com/InternLM/lmdeploy/blob/main/benchmark/profile_generation.py). Token generation throughput (tokens/s) is measured with a single prompt token and 512 generated tokens. All results are for single-batch inference.
| model | llm-awq | mlc-llm | turbomind |
| ----------- | ------- | ------- | --------- |
| Llama 2 7B | 112.9 | 159.4 | 206.4 |
| Llama 2 13B | N/A | 90.7 | 115.8 |
```shell
python benchmark/profile_generation.py \
./workspace \
--concurrency 1 --input_seqlen 1 --output_seqlen 512
```
## 4-bit Weight Quantization
It involves two steps:
- generate the quantization parameters
- quantize the model according to those parameters
### Step 1: Generate Quantization Parameter
```shell
# --calib_dataset: calibration dataset; supports c4, ptb, wikitext2, pileval
# --calib_samples: number of calibration samples; reduce it if memory is insufficient
# --calib_seqlen:  length of a single piece of text; reduce it if memory is insufficient
# --work_dir:      folder storing the PyTorch-format quantization statistics and post-quantization weights
python3 -m lmdeploy.lite.apis.calibrate \
    --model $HF_MODEL \
    --calib_dataset 'c4' \
    --calib_samples 128 \
    --calib_seqlen 2048 \
    --work_dir $WORK_DIR
```
### Step 2: Quantize Weights
LMDeploy employs the AWQ algorithm for model weight quantization.
```shell
# --w_bits:       bit width for weight quantization
# --w_sym:        whether to use symmetric quantization for weights
# --w_group_size: group size for weight quantization statistics
# --work_dir:     directory holding the quantization parameters from Step 1
python3 -m lmdeploy.lite.apis.auto_awq \
    --model $HF_MODEL \
    --w_bits 4 \
    --w_sym False \
    --w_group_size 128 \
    --work_dir $WORK_DIR
```
After the quantization is complete, the quantized model is saved to `$WORK_DIR`. Then you can proceed with model inference according to the instructions in the ["4-bit LLM model Inference"](#4-bit-llm-model-inference) section.
|
bigmorning/whisper_charsplit_new_round3__0007
|
bigmorning
| 2023-08-14T03:53:08Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T03:53:01Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0007
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0007
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0008
- Train Accuracy: 0.0795
- Train Wermet: 7.3468
- Validation Loss: 0.5734
- Validation Accuracy: 0.0769
- Validation Wermet: 6.1909
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
| 0.0012 | 0.0795 | 7.8476 | 0.5699 | 0.0769 | 6.5976 | 2 |
| 0.0010 | 0.0795 | 7.6843 | 0.5740 | 0.0769 | 6.9513 | 3 |
| 0.0014 | 0.0795 | 8.0796 | 0.5763 | 0.0768 | 7.4043 | 4 |
| 0.0019 | 0.0795 | 7.7274 | 0.5724 | 0.0769 | 6.4922 | 5 |
| 0.0008 | 0.0795 | 7.3468 | 0.5734 | 0.0769 | 6.1909 | 6 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
AltairXz/q-FrozenLake-v1-4x4-noSlippery
|
AltairXz
| 2023-08-14T03:52:22Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-14T03:52:20Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="AltairXz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
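`load_from_hub` is a helper from the Hugging Face Deep RL course utilities. If it is not available locally, a minimal self-contained sketch (assuming the repo stores the Q-table and metadata as a single pickle, as in the course template) is:
```python
import pickle

import gymnasium as gym  # use `import gym` instead on older installs
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table/metadata from the Hub and load it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub("AltairXz/q-FrozenLake-v1-4x4-noSlippery", "q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)
```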
|
hw2942/bert-base-chinese-SSEC
|
hw2942
| 2023-08-14T03:38:11Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-14T03:25:44Z |
---
base_model: bert-base-chinese
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-chinese-wallstreetcn-morning-news-market-overview-SSEC-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-wallstreetcn-morning-news-market-overview-SSEC-v3
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1007
- Accuracy: 0.6875
## Model description
More information needed
## Intended uses & limitations
More information needed
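As a rough usage sketch (not from the model author; the input sentence is a made-up example, and the label names are whatever the checkpoint's config defines):
```python
from transformers import pipeline

# Sketch only: classify a Chinese morning-news snippet with this checkpoint.
clf = pipeline("text-classification", model="hw2942/bert-base-chinese-SSEC")
print(clf("隔夜美股收涨,A股市场情绪回暖。"))
```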
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 34 | 2.2173 | 0.7188 |
| No log | 2.0 | 68 | 1.8368 | 0.7188 |
| No log | 3.0 | 102 | 2.7822 | 0.625 |
| No log | 4.0 | 136 | 2.3597 | 0.7188 |
| No log | 5.0 | 170 | 3.3032 | 0.5312 |
| No log | 6.0 | 204 | 2.9527 | 0.6562 |
| No log | 7.0 | 238 | 2.7575 | 0.6875 |
| No log | 8.0 | 272 | 2.9714 | 0.6875 |
| No log | 9.0 | 306 | 3.0941 | 0.6875 |
| No log | 10.0 | 340 | 3.1007 | 0.6875 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_round3__0002
|
bigmorning
| 2023-08-14T03:32:20Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_charsplit_new_round2__0061",
"base_model:finetune:bigmorning/whisper_charsplit_new_round2__0061",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-14T03:32:13Z |
---
license: apache-2.0
base_model: bigmorning/whisper_charsplit_new_round2__0061
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_round3__0002
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_round3__0002
This model is a fine-tuned version of [bigmorning/whisper_charsplit_new_round2__0061](https://huggingface.co/bigmorning/whisper_charsplit_new_round2__0061) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0015
- Train Accuracy: 0.0795
- Train Wermet: 8.4221
- Validation Loss: 0.5756
- Validation Accuracy: 0.0769
- Validation Wermet: 7.1487
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.0009 | 0.0795 | 7.9492 | 0.5730 | 0.0769 | 7.2856 | 0 |
| 0.0015 | 0.0795 | 8.4221 | 0.5756 | 0.0769 | 7.1487 | 1 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
huanhkv/llama-2-7b-instruction-tuning_full
|
huanhkv
| 2023-08-14T03:10:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-13T23:39:43Z |
# How it works
The base model is NousResearch/Llama-2-7b-chat-hf.
# How to use
```python
import torch
import textwrap
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" if torch.cuda.is_available() else "cpu"
EVAL_PROMPTS = [
"Hãy viết một phản hồi thích hợp cho chỉ dẫn dưới đây.\n\n### Instruction: Messi đã đạt bao nhiêu quả bóng vàng? \n\n### Response: ",
"Hãy viết một phản hồi thích hợp cho chỉ dẫn dưới đây.\n\n### Instruction: Thủ đô nào đông dân nhất châu Á? \n\n### Response: ",
"Hãy viết một phản hồi thích hợp cho chỉ dẫn dưới đây.\n\n### Instruction: Quốc gia nào có đường biển dài nhất? \n\n### Response: ",
]
def generate_eval(model: AutoModelForCausalLM, tokenizer: AutoTokenizer):
    print("Starting Evaluation...")
    model = model.to(device)
    model.eval()
    for eval_prompt in EVAL_PROMPTS:
        batch = tokenizer(eval_prompt, return_tensors="pt").to(device)
        with torch.cuda.amp.autocast():
            output_tokens = model.generate(**batch, max_new_tokens=128)
        print("\n\n", textwrap.fill(tokenizer.decode(output_tokens[0], skip_special_tokens=False)))
        print("*" * 100)
# Load the Lora model
model = AutoModelForCausalLM.from_pretrained("huanhkv/llama-2-7b-instruction-tuning_full")
tokenizer = AutoTokenizer.from_pretrained("huanhkv/llama-2-7b-instruction-tuning_full")
generate_eval(model, tokenizer)
```
The output should be:
```
<s> Hãy viết một phản hồi thích hợp cho chỉ dẫn dưới đây.
### Instruction: Messi đã đạt bao nhiêu quả bóng vàng?
### Response: 7</s>
******************************
<s> Hãy viết một phản hồi thích hợp cho chỉ dẫn dưới đây.
### Instruction: Thủ đô nào đông dân nhất châu Á?
### Response: Đông Đông Dương là thủ đô nhất châu Á về dân số.</s>
******************************
<s> Hãy viết một phản hồi thích hợp cho chỉ dẫn dưới đây.
### Instruction: Quốc gia nào có đường biển dài nhất?
### Response: Đường biển dài nhất trên thế giới là đường biển Ấn Độ Dương, dài khoảng 65.000 km.</s>
```
|
okxooxoo/donut-base-sroie
|
okxooxoo
| 2023-08-14T03:04:37Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-13T06:51:45Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.3
- Tokenizers 0.11.0
|
wangjin2000/git-base-finetune-Aug142023_03
|
wangjin2000
| 2023-08-14T03:01:21Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"git",
"image-text-to-text",
"image-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-08-14T01:38:50Z |
---
pipeline_tag: image-to-text
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** microsoft/git_base
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
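A hedged starting point: since this repo is a GIT fine-tune for image-to-text, the standard pipeline should apply; the image path below is hypothetical and this snippet is not provided by the model author.
```python
from PIL import Image
from transformers import pipeline

# Sketch only: caption an image with this GIT fine-tune via the image-to-text pipeline.
captioner = pipeline("image-to-text", model="wangjin2000/git-base-finetune-Aug142023_03")
image = Image.open("example.jpg")  # hypothetical local image
print(captioner(image)[0]["generated_text"])
```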
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mrkusypl/Miroslaw-Stabinski
|
mrkusypl
| 2023-08-14T02:53:11Z | 0 | 0 | null |
[
"pl",
"region:us"
] | null | 2023-08-07T20:26:39Z |
---
language:
- pl
---
<center>
<img src="https://cdn.discordapp.com/attachments/1138209218969731183/1138209219384979597/240774873_122099140169811_8790049852222389754_n.jpg"></img>
<h1>Mirosław Stabiński (RVC v2) (Mangio Crepe 64) (1125 Epochs)</h1>
**Model by:** kusy <br/>
**Voice Actor:** Mirosław Stabiński <br/>
**Dataset:** 00:21:47 <br/>
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1138209218969731183/1138209243686776903/example.mp3" type="audio/mpeg">
</audio><br />
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1138209218969731183/1138211956268998697/gadanie.wav" type="audio/wav">
</audio>
<a href="https://huggingface.co/mrkusypl/Miroslaw-Stabinski/resolve/main/Miros%C5%82aw%20Stabi%C5%84ski%20%5B1125%20epoch%20%2B%20RVC%20v2%5D.zip">Download or copy the link</a>
</center>
|
AARon99/MedText-llama-2-70b-Guanaco-QLoRA-fp16
|
AARon99
| 2023-08-14T02:49:27Z | 0 | 3 | null |
[
"license:other",
"region:us"
] | null | 2023-07-28T18:18:28Z |
---
license: other
---
I am learning how to make LoRAs with Oobabooga; these data are for experimental and research purposes.
This is a Medical Knowledge LoRA made for use with this model: [llama-2-70b-Guanaco-QLoRA-fp16](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-fp16) (quantized and merged models coming soon).
---
Model lineage:
https://huggingface.co/timdettmers/guanaco-65b -> https://huggingface.co/Mikael110/llama-2-70b-guanaco-qlora -> https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-fp16
---
Training Data and Formatting:
Training data are garnered from: https://huggingface.co/datasets/BI55/MedText
These training data were then formatted for use with the "Raw text file" training option in the Oobabooga text-generation-webui:
(https://github.com/oobabooga/text-generation-webui)
Training parameters are in the training_parameters.json file and there is a screenshot of the UI with the correct settings.
---
Examples and Additional Information:
Check out the png files in the repo for an example conversation as well as other pieces of information that beginners might find useful.

---
Current/Future Work:
1. Finish training with a "Structured Dataset". I have a .json file with a structured dataset for the Guanaco model, but it takes significantly longer to process in the Oobabooga webui.
2. Train the vanilla LlamaV2 70B model, with Raw and Structured data.
3. Merge the LoRA with the LLM so you don't need to load the LoRA separately.
---
Use at your own risk; I am using this repo both to organize my results and potentially to help others with LoRA training.
It is not the intention of this repo to provide medical information or advice.
Refer to the reference material for licensing guidance. I don't care how you use this LoRA, but you should check the licensing requirements of the reference material if you intend to use this for anything other than personal use.
I want to thank and acknowledge the hard work of the people involved in the creation of the dataset and Guanaco models/LoRA! Your work is greatly appreciated <3
|
Thamer/wav2vec-fine_tuned-speech_command2
|
Thamer
| 2023-08-14T02:27:06Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:speech_commands",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-13T19:01:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- speech_commands
metrics:
- accuracy
model-index:
- name: wav2vec-fine_tuned-speech_command2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-fine_tuned-speech_command2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the speech_commands dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1040
- Accuracy: 0.9735
## Model description
More information needed
## Intended uses & limitations
More information needed
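A minimal inference sketch using the standard `transformers` audio-classification pipeline is shown below; the audio path and the exact label set (which depends on the speech_commands configuration used for fine-tuning) are assumptions.

```python
from transformers import pipeline

# Load this checkpoint as a keyword-spotting classifier.
classifier = pipeline(
    "audio-classification",
    model="Thamer/wav2vec-fine_tuned-speech_command2",
)

# Any short 16 kHz mono recording of a spoken command should work; the path is illustrative.
predictions = classifier("path/to/command.wav")
print(predictions)  # e.g. [{"label": "...", "score": ...}, ...]
```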
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3874 | 1.0 | 50 | 0.9633 | 0.9229 |
| 0.5144 | 2.0 | 100 | 0.4398 | 0.9138 |
| 0.3538 | 3.0 | 150 | 0.1688 | 0.9651 |
| 0.2956 | 4.0 | 200 | 0.1622 | 0.9623 |
| 0.2662 | 5.0 | 250 | 0.1425 | 0.9665 |
| 0.2122 | 6.0 | 300 | 0.1301 | 0.9682 |
| 0.1948 | 7.0 | 350 | 0.1232 | 0.9693 |
| 0.1837 | 8.0 | 400 | 0.1116 | 0.9734 |
| 0.1631 | 9.0 | 450 | 0.1041 | 0.9734 |
| 0.1441 | 10.0 | 500 | 0.1040 | 0.9735 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Evan-Lin/Bart-large-abs-amazon-entailment
|
Evan-Lin
| 2023-08-14T01:55:53Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-08-14T01:43:21Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin/Bart-large-abs-amazon-entailment")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-large-abs-amazon-entailment")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-large-abs-amazon-entailment")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Evan-Lin/Bart-large-abs-amazon-entailment2-rouge
|
Evan-Lin
| 2023-08-14T01:33:15Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-08-14T01:15:41Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin/Bart-large-abs-amazon-entailment2-rouge")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-large-abs-amazon-entailment2-rouge")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-large-abs-amazon-entailment2-rouge")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
csukuangfj/sherpa-onnx-streaming-paraformer-bilingual-zh-en
|
csukuangfj
| 2023-08-14T01:27:14Z | 0 | 1 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-08-14T01:25:23Z |
---
license: apache-2.0
---
`*.onnx` models are converted from
https://www.modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8404-online/summary
See also https://huggingface.co/csukuangfj/streaming-paraformer-zh
Note: We have used
https://huggingface.co/csukuangfj/streaming-paraformer-zh/blob/main/add-model-metadata.py
to add metadata to `model.onnx` and then renamed it to `encoder.onnx`.
|
FYP19/my_model-2
|
FYP19
| 2023-08-14T01:01:33Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-12T14:57:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_model-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_model-2
This model is a fine-tuned version of [FYP19/t5-small-finetuned-wikisql](https://huggingface.co/FYP19/t5-small-finetuned-wikisql) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0847
- Rouge2 Precision: 0.8004
- Rouge2 Recall: 0.4506
- Rouge2 Fmeasure: 0.5172
## Model description
More information needed
## Intended uses & limitations
More information needed
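Given the model's wikisql lineage, a minimal text-to-SQL sketch with the `transformers` text2text pipeline is shown below; the prompt wording is an assumption, since this card does not document the expected input format.

```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint as a text2text pipeline.
generator = pipeline("text2text-generation", model="FYP19/my_model-2")

# The question below is only an illustration; check the training data for the
# exact input format this model expects.
question = "How many singers do we have?"
print(generator(question, max_length=128)[0]["generated_text"])
```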
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.0414 | 1.0 | 1832 | 0.0620 | 0.7123 | 0.3937 | 0.4486 |
| 0.0255 | 2.0 | 3664 | 0.0669 | 0.7301 | 0.4035 | 0.4621 |
| 0.0217 | 3.0 | 5496 | 0.0697 | 0.7895 | 0.4469 | 0.511 |
| 0.0161 | 4.0 | 7328 | 0.0712 | 0.7569 | 0.4217 | 0.4827 |
| 0.0115 | 5.0 | 9160 | 0.0763 | 0.7778 | 0.435 | 0.4992 |
| 0.009 | 6.0 | 10992 | 0.0785 | 0.7751 | 0.4306 | 0.4945 |
| 0.0057 | 7.0 | 12824 | 0.0825 | 0.7755 | 0.4326 | 0.4963 |
| 0.0045 | 8.0 | 14656 | 0.0847 | 0.8004 | 0.4506 | 0.5172 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
brunoboat/ppo-LunarLander-8
|
brunoboat
| 2023-08-14T00:42:45Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-14T00:11:54Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -145.84 +/- 70.15
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'brunoboat/ppo-LunarLander-8'
'batch_size': 512
'minibatch_size': 128}
```
|
MichaelYxWang/Taxi-v3
|
MichaelYxWang
| 2023-08-14T00:26:31Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-14T00:26:28Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="MichaelYxWang/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
C-Lo/balanced_gendered-dataset
|
C-Lo
| 2023-08-14T00:21:59Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-14T00:18:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: balanced_gendered-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# balanced_gendered-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
MichaelYxWang/q-FrozenLake-v1-4x4-noSlippery
|
MichaelYxWang
| 2023-08-14T00:21:36Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-14T00:21:34Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="MichaelYxWang/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
HexHands/finishABOUTME
|
HexHands
| 2023-08-14T00:04:07Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-01T01:56:24Z |
---
license: cc-by-4.0
language: en
tags:
- text-generation
pipeline_tag: text-generation
widget:
- text: "My name is "
- text: "I believe that I need to be more friendly."
- text: "Follow @griffpatch!"
- text: "How will my projects get better?"
---
# finishABOUTME
finishABOUTME is a torch model which was trained on 2000 Scratch About Me sections.
It is meant to finish any About Me section!
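A minimal sketch for sampling completions with the standard `transformers` text-generation pipeline; the sampling parameters are illustrative, not settings documented by this card.

```python
from transformers import pipeline

# Load the GPT-2-based About Me finisher.
generator = pipeline("text-generation", model="HexHands/finishABOUTME")

prompt = "This Scratch Studio will reach 100 followers in a few days!\n"
result = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```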
# Example
Input: This Scratch Studio will reach 100 followers in a few days!\n
Output: This Scratch Studio will reach 100 followers in a few days!\nThis studio here so much slower. Sorry for the inconveni have all, but we get every monday feel free to add projects about duckling Pond!\n\nThe Duckling Pond
|
ckandemir/ML-Agents-Pyramids
|
ckandemir
| 2023-08-13T23:58:35Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-13T23:58:32Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ckandemir/ML-Agents-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
nihal-tw/finetuned-f7b
|
nihal-tw
| 2023-08-13T23:49:41Z | 31 | 0 |
peft
|
[
"peft",
"medical",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-13T23:11:52Z |
---
library_name: peft
license: apache-2.0
pipeline_tag: text-generation
tags:
- medical
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
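For reference, the quantization settings listed above correspond roughly to the `transformers` `BitsAndBytesConfig` sketched below; the base model id is not stated in this card, so `"BASE_MODEL_ID"` is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirror the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "BASE_MODEL_ID" is a placeholder; this card does not name the base checkpoint.
base = AutoModelForCausalLM.from_pretrained(
    "BASE_MODEL_ID", quantization_config=bnb_config, device_map="auto"
)
# Attach the PEFT adapter from this repository on top of the quantized base model.
model = PeftModel.from_pretrained(base, "nihal-tw/finetuned-f7b")
```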
### Framework versions
- PEFT 0.5.0.dev0
|
lockiultra/rating_model
|
lockiultra
| 2023-08-13T23:48:40Z | 67 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-13T23:45:04Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: rating_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# rating_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
johnmarx/lora-trained-xl
|
johnmarx
| 2023-08-13T23:39:43Z | 0 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-13T22:56:50Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: znoelleb
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - johnmarx/lora-trained-xl
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on znoelleb using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
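A minimal sketch for loading these LoRA weights on top of the SDXL base model with `diffusers`; the prompt simply reuses the instance token znoelleb from this card, and the inference settings are illustrative.

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model these LoRA weights were trained against.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("johnmarx/lora-trained-xl")

image = pipe("a photo of znoelleb", num_inference_steps=30).images[0]
image.save("znoelleb.png")
```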
|
AOLCDROM/WAV2LIP-HQ-Updated-MIRROR
|
AOLCDROM
| 2023-08-13T23:22:41Z | 0 | 3 | null |
[
"region:us"
] | null | 2023-08-13T23:14:06Z |
This is a mirror of the weights for the Wav2Lip-HQ-Updated repo, because the linked files on Google Drive appear to be incorrect or down.
License follows the original authors' intent.
---
license: other
---
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e9_s108_v4_l5_v50
|
KingKazma
| 2023-08-13T23:20:22Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T23:20:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e8_s108_v4_l5_v50
|
KingKazma
| 2023-08-13T23:12:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T23:12:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
FireHead90544/RudraRVCs
|
FireHead90544
| 2023-08-13T23:08:19Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-08-09T15:39:45Z |
---
license: openrail
---
# RVCs - Some of the voices I trained
**Seiya Ryuuguuin - The Hero Is Overpowered But Overly Cautious (JP VA: Yuuichirou Umehara)**
Currently, the following models are available:
- ## [Seiya Ryuuguuin RVC v2 Mangio-Crepe (340 Epochs, 5440 Steps)](https://huggingface.co/FireHead90544/RudraRVCs/resolve/main/SeiyaRyuuguuinRVC.zip)
- ## [Seiya Ryuuguuin RVC v2 RMVPE (300 Epochs, 6300 Steps)](https://huggingface.co/FireHead90544/RudraRVCs/resolve/main/SeiyaRyuuguuinV2.zip) # This seems to perform better
- ## [Seiya Ryuuguuin Max RVC v2 RMVPE (400 Epochs, 8400 Steps)](https://huggingface.co/FireHead90544/RudraRVCs/resolve/main/SeiyaRyuuguuinMax.zip) # Probably the best one
## Samples
- ### Mangio-Crepe
- [NEFFEX - Cold](https://cdn.discordapp.com/attachments/1090766429785178142/1138861234561753249/Seiya_Ryuuguuin_-_Cold.mp3)
- [Kenshi Yonezu - Kick Back](https://cdn.discordapp.com/attachments/1090766429785178142/1138861234951819264/Seiya_Ryuuguuin_-_Kick_Back.mp3)
- ### RMVPE
- [YOASOBI - Running Into The Night](https://cdn.discordapp.com/attachments/549264174753120267/1138908849076703332/Seiya_Ryuuguuin_-_Racing_Into_The_Night.mp3)
- [Tk From Ling Tosite Sigure - Unravel](https://cdn.discordapp.com/attachments/549264174753120267/1138908849789734972/Seiya_Ryuuguuin_-_Unravel.mp3)
- [Jin Hashimoto - Stand Proud](https://cdn.discordapp.com/attachments/549264174753120267/1138908849424834741/Seiya_Ryuuguuin_-_Stand_Proud.mp3)
- [KSUKE - Contradiction](https://cdn.discordapp.com/attachments/549264174753120267/1138908848749551636/Seiya_Ryuuguuin_-_Contradiction.mp3)
- [Smash Mouth - All Star](https://cdn.discordapp.com/attachments/549264174753120267/1138908850137858189/Seiya_Ryuuguuin_-_All_Star.mp3)
- [OxT - Clattanoia](https://cdn.discordapp.com/attachments/549264174753120267/1138908850469216327/Seiya_Ryuuguuin_-_Clattanoia.mp3)
- <video controls width="640" height="360">
<source src="https://cdn.discordapp.com/attachments/1138965403658362910/1139679982717767870/Cupid.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
- <video controls width="640" height="360">
<source src="https://cdn.discordapp.com/attachments/1138965403658362910/1140419271772606474/Yoru_Ni_Kakeru.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e7_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T23:08:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T23:08:14Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e6_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T23:00:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T23:00:44Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e6_s108_v4_l5_v50
|
KingKazma
| 2023-08-13T22:56:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:56:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e5_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T22:53:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:53:13Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e4_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T22:45:46Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:45:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
platzi/platzi-vit-model-andres-grimaldos
|
platzi
| 2023-08-13T22:42:06Z | 215 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-13T00:51:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
widget:
- src: https://huggingface.co/platzi/platzi-vit-model-andres-grimaldos/resolve/main/healthy.jpeg
example_title: Healthy
- src: https://huggingface.co/platzi/platzi-vit-model-andres-grimaldos/resolve/main/been-rust.jpeg
example_title: Bean Rust
model-index:
- name: platzi-vit-model-andres-grimaldos
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9924812030075187
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-andres-grimaldos
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0166
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
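A minimal inference sketch with the `transformers` image-classification pipeline, using one of the example images referenced in this card's widget:

```python
from transformers import pipeline

# Load the fine-tuned ViT bean-disease classifier.
classifier = pipeline(
    "image-classification",
    model="platzi/platzi-vit-model-andres-grimaldos",
)

url = "https://huggingface.co/platzi/platzi-vit-model-andres-grimaldos/resolve/main/healthy.jpeg"
print(classifier(url))  # e.g. [{"label": "healthy", "score": ...}, ...]
```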
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1458 | 3.85 | 500 | 0.0166 | 0.9925 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
gregorgabrovsek/SloBertAA_Top5_WithOOC_082023
|
gregorgabrovsek
| 2023-08-13T22:41:27Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-13T17:09:23Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SloBertAA_Top5_WithOOC_082023
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SloBertAA_Top5_WithOOC_082023
This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7870
- Accuracy: 0.9013
- F1: 0.9010
- Precision: 0.9013
- Recall: 0.9013
## Model description
More information needed
## Intended uses & limitations
More information needed
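A minimal sketch using the `transformers` text-classification pipeline; the label names and the exact authorship-attribution setup are not documented in this card, and the example sentence is only an illustration.

```python
from transformers import pipeline

# Load the fine-tuned SloBERTa classifier.
classifier = pipeline(
    "text-classification",
    model="gregorgabrovsek/SloBertAA_Top5_WithOOC_082023",
)

# Any Slovenian text can be classified.
print(classifier("To je kratek primer besedila v slovenščini."))
```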
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.381 | 1.0 | 10508 | 0.3981 | 0.8665 | 0.8636 | 0.8666 | 0.8665 |
| 0.2912 | 2.0 | 21016 | 0.3497 | 0.8855 | 0.8854 | 0.8868 | 0.8855 |
| 0.2352 | 3.0 | 31524 | 0.3778 | 0.8906 | 0.8901 | 0.8908 | 0.8906 |
| 0.1875 | 4.0 | 42032 | 0.4656 | 0.8903 | 0.8902 | 0.8920 | 0.8903 |
| 0.1447 | 5.0 | 52540 | 0.5620 | 0.8944 | 0.8949 | 0.8969 | 0.8944 |
| 0.0938 | 6.0 | 63048 | 0.6150 | 0.8975 | 0.8975 | 0.8980 | 0.8975 |
| 0.0685 | 7.0 | 73556 | 0.7084 | 0.8950 | 0.8945 | 0.8953 | 0.8950 |
| 0.0449 | 8.0 | 84064 | 0.7499 | 0.8997 | 0.8992 | 0.8995 | 0.8997 |
| 0.0267 | 9.0 | 94572 | 0.7734 | 0.8987 | 0.8983 | 0.8990 | 0.8987 |
| 0.021 | 10.0 | 105080 | 0.7870 | 0.9013 | 0.9010 | 0.9013 | 0.9013 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.8.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e4_s108_v4_l5_v50
|
KingKazma
| 2023-08-13T22:40:30Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:40:28Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e3_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T22:38:16Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:38:13Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e3_s108_v4_l5_v50
|
KingKazma
| 2023-08-13T22:32:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:32:30Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
redstonehero/meinahentai_v4
|
redstonehero
| 2023-08-13T22:29:04Z | 29 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-13T20:13:29Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e1_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T22:23:15Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:23:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
ckandemir/ppo-SnowballTarget
|
ckandemir
| 2023-08-13T22:16:59Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-13T22:16:57Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ckandemir/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e-1_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T22:08:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:08:15Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_p_tuning_500_10_3000_8_e-1_s108_v4_l5_v50
|
KingKazma
| 2023-08-13T22:00:43Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T22:00:41Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e9_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:55:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:55:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e8_s55555_v4_l4_v100
|
KingKazma
| 2023-08-13T21:48:38Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:48:22Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e9_s55555_v4_l5_v50
|
KingKazma
| 2023-08-13T21:47:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T21:47:38Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Wanaldino/lora-trained-xl-colab
|
Wanaldino
| 2023-08-13T21:43:30Z | 0 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-13T19:54:46Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of a women
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Wanaldino/lora-trained-xl-colab
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of a women using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
redstonehero/perfectworld_v5
|
redstonehero
| 2023-08-13T21:42:07Z | 30 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-13T20:13:35Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|