| Column | Type | Observed range / values |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-09 12:33:01 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 550 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-09 12:32:40 |
| card | string | length 11 – 1.01M |

Each record below lists these columns on a single line (modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt), followed by the full card text.
ILKT/2024-06-24_22-31-18_epoch_68 | ILKT | 2024-06-28T18:20:15Z | 141 | 0 | sentence-transformers | [sentence-transformers, safetensors, ILKT, sentence-similarity, mteb, feature-extraction, custom_code, en, pl, model-index, autotrain_compatible, endpoints_compatible, region:us] | sentence-similarity | 2024-06-25T18:07:17Z
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_68
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.717693836978132
- type: f1
value: 21.8718761030048
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 55.910000000000004
- type: ap
value: 15.725742132380047
- type: f1
value: 47.07555207349068
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 11.02189104267596
- type: v_measure_std
value: 2.076618095290337
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.032952252858106
- type: f1
value: 25.792242977543527
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.022626660108212
- type: f1
value: 25.33504563897343
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.44250168123739
- type: f1
value: 34.71502961902022
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.32562715199213
- type: f1
value: 35.25444966807039
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 61.453808282652766
- type: ap
value: 71.68193384089211
- type: f1
value: 57.33297115047484
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 36.9210116722259
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 32.38778773867662
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 49.45983379501385
- type: f1
value: 50.57704677799859
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 20.263157894736842
- type: f1
value: 17.770873756468234
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
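The `model-index` block above follows the standard Hugging Face card-metadata schema, so the reported MTEB scores can be read programmatically. A minimal sketch, assuming `huggingface_hub`'s `ModelCard`/`EvalResult` API (not part of the card itself):
```python
# Sketch: parse the MTEB results from the model-index metadata above.
from huggingface_hub import ModelCard

card = ModelCard.load("ILKT/2024-06-24_22-31-18_epoch_68")
for result in card.data.eval_results or []:
    # e.g. "MTEB AllegroReviews accuracy 23.717693836978132"
    print(result.dataset_name, result.metric_type, result.metric_value)
```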
CreitinGameplays/mistral-7b-v0.1-chat-test | CreitinGameplays | 2024-06-28T18:18:30Z | 10 | 0 | transformers | [transformers, safetensors, mistral, text-generation, conversational, dataset:CreitinGameplays/merged-data-v2, base_model:mistralai/Mistral-7B-v0.1, base_model:finetune:mistralai/Mistral-7B-v0.1, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2024-06-28T15:11:00Z
---
license: mit
datasets:
- CreitinGameplays/merged-data-v2
base_model: mistralai/Mistral-7B-v0.1
---
ILKT/2024-06-24_22-31-18_epoch_67 | ILKT | 2024-06-28T18:14:57Z | 140 | 0 | sentence-transformers | [sentence-transformers, safetensors, ILKT, sentence-similarity, mteb, feature-extraction, custom_code, en, pl, model-index, autotrain_compatible, endpoints_compatible, region:us] | sentence-similarity | 2024-06-25T17:48:37Z
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_67
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 24.622266401590455
- type: f1
value: 22.936267682156487
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 53.48
- type: ap
value: 15.322095521539064
- type: f1
value: 45.49225512083147
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 9.363928383066206
- type: v_measure_std
value: 1.3367977820048715
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 26.54001344989913
- type: f1
value: 23.96832609186341
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 26.015740285292676
- type: f1
value: 23.212345772348385
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.862138533960994
- type: f1
value: 32.8318592868999
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.63354648303001
- type: f1
value: 33.231436557685505
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 65.54590211410367
- type: ap
value: 74.21876513105504
- type: f1
value: 62.16874555498553
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.760616638633856
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 32.24926171089566
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 49.279778393351805
- type: f1
value: 49.51142756516184
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 18.157894736842103
- type: f1
value: 15.771804883173445
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf | RichardErkhov | 2024-06-28T18:04:15Z | 79 | 0 | null | [gguf, arxiv:1910.09700, endpoints_compatible, region:us, conversational] | null | 2024-06-28T15:24:54Z
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OrpoLlama-3-8B-memorize-translate - GGUF
- Model creator: https://huggingface.co/ItchyChin/
- Original model: https://huggingface.co/ItchyChin/OrpoLlama-3-8B-memorize-translate/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OrpoLlama-3-8B-memorize-translate.Q2_K.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q2_K.gguf) | Q2_K | 2.96GB |
| [OrpoLlama-3-8B-memorize-translate.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [OrpoLlama-3-8B-memorize-translate.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [OrpoLlama-3-8B-memorize-translate.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [OrpoLlama-3-8B-memorize-translate.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [OrpoLlama-3-8B-memorize-translate.Q3_K.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q3_K.gguf) | Q3_K | 3.74GB |
| [OrpoLlama-3-8B-memorize-translate.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [OrpoLlama-3-8B-memorize-translate.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [OrpoLlama-3-8B-memorize-translate.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [OrpoLlama-3-8B-memorize-translate.Q4_0.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q4_0.gguf) | Q4_0 | 4.34GB |
| [OrpoLlama-3-8B-memorize-translate.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [OrpoLlama-3-8B-memorize-translate.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [OrpoLlama-3-8B-memorize-translate.Q4_K.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q4_K.gguf) | Q4_K | 4.58GB |
| [OrpoLlama-3-8B-memorize-translate.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [OrpoLlama-3-8B-memorize-translate.Q4_1.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q4_1.gguf) | Q4_1 | 4.78GB |
| [OrpoLlama-3-8B-memorize-translate.Q5_0.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q5_0.gguf) | Q5_0 | 5.21GB |
| [OrpoLlama-3-8B-memorize-translate.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [OrpoLlama-3-8B-memorize-translate.Q5_K.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q5_K.gguf) | Q5_K | 5.34GB |
| [OrpoLlama-3-8B-memorize-translate.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [OrpoLlama-3-8B-memorize-translate.Q5_1.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q5_1.gguf) | Q5_1 | 5.65GB |
| [OrpoLlama-3-8B-memorize-translate.Q6_K.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q6_K.gguf) | Q6_K | 6.14GB |
| [OrpoLlama-3-8B-memorize-translate.Q8_0.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q8_0.gguf) | Q8_0 | 7.95GB |
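
As a quick sketch (not part of this README), any of the files listed above can be fetched programmatically with `huggingface_hub`; the filename below is copied from the table:
```python
# pip install huggingface_hub
# Minimal sketch: download one quant file from this repo to the local cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf",
    filename="OrpoLlama-3-8B-memorize-translate.Q4_K_M.gguf",  # ~4.58GB per the table
)
print(path)  # local path of the downloaded GGUF file
```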
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
ILKT/2024-06-24_22-31-18_epoch_64 | ILKT | 2024-06-28T18:03:18Z | 141 | 0 | sentence-transformers | [sentence-transformers, safetensors, ILKT, sentence-similarity, mteb, feature-extraction, custom_code, en, pl, model-index, autotrain_compatible, endpoints_compatible, region:us] | sentence-similarity | 2024-06-25T16:50:36Z
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_64
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.031809145129227
- type: f1
value: 21.057805091218334
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 55.749999999999986
- type: ap
value: 14.966302623752831
- type: f1
value: 45.9572961131143
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 9.193257287011564
- type: v_measure_std
value: 1.3490920029411124
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 25.295897780766648
- type: f1
value: 22.70370592035699
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 25.302508607968516
- type: f1
value: 22.20934032431153
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 31.418964357767322
- type: f1
value: 29.564934972848455
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 31.24938514510575
- type: f1
value: 29.831266979197295
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 61.934549666956265
- type: ap
value: 72.59210383383544
- type: f1
value: 58.69042699225203
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.946668247493655
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.51135720828322
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 48.10249307479223
- type: f1
value: 49.30092885238284
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 22.51012145748988
- type: f1
value: 19.6361344035574
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF | mradermacher | 2024-06-28T17:57:23Z | 7 | 0 | transformers | [transformers, gguf, tr, base_model:Trendyol/Trendyol-LLM-7b-chat-v1.8, base_model:quantized:Trendyol/Trendyol-LLM-7b-chat-v1.8, license:apache-2.0, endpoints_compatible, region:us] | null | 2024-06-28T17:30:41Z
---
base_model: Trendyol/Trendyol-LLM-7b-chat-v1.8
language:
- tr
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.8
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.Q2_K.gguf) | Q2_K | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.IQ3_XS.gguf) | IQ3_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.Q3_K_L.gguf) | Q3_K_L | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.IQ4_XS.gguf) | IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.Q5_K_S.gguf) | Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.Q5_K_M.gguf) | Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.Q6_K.gguf) | Q6_K | 6.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF/resolve/main/Trendyol-LLM-7b-chat-v1.8.f16.gguf) | f16 | 14.8 | 16 bpw, overkill |
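
As a minimal sketch (not from this README), a single-part quant from the table above could be downloaded and run with `llama-cpp-python`; the package choice and filename are assumptions for illustration:
```python
# pip install llama-cpp-python huggingface_hub
# Sketch: fetch one of the quants listed above and generate a few tokens.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Trendyol-LLM-7b-chat-v1.8-GGUF",
    filename="Trendyol-LLM-7b-chat-v1.8.Q4_K_M.gguf",  # "fast, recommended" per the table
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Merhaba! ", max_tokens=32)
print(out["choices"][0]["text"])
```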
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
samvelkoch/masked-fat-mamba | samvelkoch | 2024-06-28T17:57:13Z | 5 | 0 | transformers | [transformers, safetensors, llama, text-generation, gpt, llm, large language model, h2o-llmstudio, en, autotrain_compatible, text-generation-inference, region:us] | text-generation | 2024-06-28T17:53:09Z
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [h2oai/h2ogpt-4096-llama2-7b](https://huggingface.co/h2oai/h2ogpt-4096-llama2-7b)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.40.2
```
Also make sure to provide your Hugging Face token if the model is hosted in a private repo.
- You can log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
You will also need to download the classification head, either manually, or by running the following code:
```python
from huggingface_hub import hf_hub_download
model_name = "samvelkoch/masked-fat-mamba" # either local folder or huggingface model name
hf_hub_download(repo_id=model_name, filename="classification_head.pth", local_dir="./")
```
You can make classification predictions by following the example below:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "samvelkoch/masked-fat-mamba" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "How are you?"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
).cuda().eval()
head_weights = torch.load("classification_head.pth", map_location="cuda")
# settings can be arbitrary here as we overwrite with saved weights
head = torch.nn.Linear(1, 1, bias=False).to("cuda")
head.weight.data = head_weights
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
out = model(**inputs).logits
logits = head(out[:,-1])
print(logits)
```
## Quantization and sharding
You can load the model using quantization by specifying `load_in_8bit=True` or `load_in_4bit=True`. Sharding across multiple GPUs is also possible by setting `device_map="auto"`.
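A minimal sketch of both options together (the keyword arguments are standard `transformers` loading parameters; the snippet itself is not from the original card):
```python
# Sketch only: 4-bit loading plus automatic sharding across visible GPUs.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "samvelkoch/masked-fat-mamba",
    load_in_4bit=True,   # or load_in_8bit=True
    device_map="auto",   # distribute layers across all available GPUs
    trust_remote_code=True,
)
```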
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
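As a rough sanity check (not from the original card), the parameter count implied by the dimensions printed above can be tallied by hand; the sketch below reproduces the familiar ~7B figure for this Llama-2-7B-shaped network:
```python
# Back-of-the-envelope parameter count from the printed architecture
# (32 layers, hidden 4096, MLP 11008, vocab 32000); norm weights are negligible.
hidden, ffn, vocab, layers = 4096, 11008, 32000, 32

attn = 4 * hidden * hidden   # q/k/v/o projections, no bias
mlp = 3 * hidden * ffn       # gate/up/down projections
per_layer = attn + mlp
total = layers * per_layer + 2 * vocab * hidden  # + embed_tokens and lm_head

print(f"{total / 1e9:.2f}B parameters")  # ≈ 6.74B, i.e. the usual "7B"
```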
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
rainjay/gemma-2-27b-it-4bit | rainjay | 2024-06-28T17:51:44Z | 22 | 3 | transformers | [transformers, safetensors, gemma2, text-generation, conversational, arxiv:2009.03300, arxiv:1905.07830, arxiv:1911.11641, arxiv:1904.09728, arxiv:1905.10044, arxiv:1907.10641, arxiv:1811.00937, arxiv:1809.02789, arxiv:1911.01547, arxiv:1705.03551, arxiv:2107.03374, arxiv:2108.07732, arxiv:2110.14168, arxiv:2009.11462, arxiv:2101.11718, arxiv:2110.08193, arxiv:1804.09301, arxiv:2109.07958, arxiv:1804.06876, arxiv:2103.03874, arxiv:2304.06364, arxiv:2206.04615, arxiv:2203.09509, license:gemma, autotrain_compatible, text-generation-inference, endpoints_compatible, 4-bit, bitsandbytes, region:us] | text-generation | 2024-06-28T17:20:50Z
---
library_name: transformers
license: gemma
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
---
# Fork from google/gemma-2-27b-it
## 4-bit Quantization
```python
import torch
from transformers import BitsAndBytesConfig

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
)
```
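A hedged sketch of how such a config is typically applied at load time (the card only shows the config itself; the call below is an assumption, not part of this README):
```python
# Sketch: quantize the upstream model on load with the nf4_config defined above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-27b-it",
    quantization_config=nf4_config,  # from the previous snippet
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
```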
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-27b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcast to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-27b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
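For instance, a minimal sketch of building the same single-turn prompt by hand (string literals copied from the template shown above):
```python
# Manual construction of the prompt shown above. <bos> is included here
# because the generation snippet below uses add_special_tokens=False.
user_message = "Write a hello world program"
prompt = (
    "<bos><start_of_turn>user\n"
    f"{user_message}<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```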
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
human review) and the exploration of de-biasing techniques are encouraged
during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, relative to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
|
mlx-community/Hercules-5.0-Qwen2-1.5B-4bits
|
mlx-community
| 2024-06-28T17:49:51Z | 10 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen2",
"en",
"dataset:Locutusque/hercules-v5.0",
"license:apache-2.0",
"region:us"
] | null | 2024-06-28T17:39:33Z |
---
language:
- en
license: apache-2.0
tags:
- mlx
datasets:
- Locutusque/hercules-v5.0
inference:
parameters:
do_sample: true
temperature: 0.8
top_p: 0.95
top_k: 40
min_p: 0.1
max_new_tokens: 250
repetition_penalty: 1.1
---
# mlx-community/Hercules-5.0-Qwen2-1.5B-4bits
The Model [mlx-community/Hercules-5.0-Qwen2-1.5B-4bits](https://huggingface.co/mlx-community/Hercules-5.0-Qwen2-1.5B-4bits) was converted to MLX format from [M4-ai/Hercules-5.0-Qwen2-1.5B](https://huggingface.co/M4-ai/Hercules-5.0-Qwen2-1.5B) using mlx-lm version **0.14.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Hercules-5.0-Qwen2-1.5B-4bits")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
mradermacher/DeepSeekMath-RL-Step-DPO-GGUF
|
mradermacher
| 2024-06-28T17:48:43Z | 370 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:xinlai/DeepSeekMath-RL-Step-DPO",
"base_model:quantized:xinlai/DeepSeekMath-RL-Step-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-28T16:18:22Z |
---
base_model: xinlai/DeepSeekMath-RL-Step-DPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/xinlai/DeepSeekMath-RL-Step-DPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
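For a quick start, here is a minimal, hedged sketch using `huggingface_hub` and
`llama-cpp-python` (neither is required by this repo; the quant choice, `n_ctx`,
and prompt below are assumptions):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the single-file quants listed below (Q4_K_M as an example)
path = hf_hub_download(
    repo_id="mradermacher/DeepSeekMath-RL-Step-DPO-GGUF",
    filename="DeepSeekMath-RL-Step-DPO.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)  # context size is an assumption
out = llm("Solve for x: 2x + 3 = 11.", max_tokens=64)
print(out["choices"][0]["text"])
```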
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.IQ3_S.gguf) | IQ3_S | 3.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-RL-Step-DPO-GGUF/resolve/main/DeepSeekMath-RL-Step-DPO.f16.gguf) | f16 | 13.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ILKT/2024-06-24_22-31-18_epoch_61
|
ILKT
| 2024-06-28T17:47:17Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T15:52:44Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_61
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.932405566600398
- type: f1
value: 20.94902529179322
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 54.33
- type: ap
value: 15.47199582631999
- type: f1
value: 45.997891561312656
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 9.084943103309566
- type: v_measure_std
value: 1.458336761181277
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 25.060524546065903
- type: f1
value: 22.58815267353224
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 24.727004426955233
- type: f1
value: 22.280090733703737
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 32.7572293207801
- type: f1
value: 31.213486455980178
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 32.508607968519435
- type: f1
value: 31.604345690790854
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 61.24529394729221
- type: ap
value: 71.93372173407076
- type: f1
value: 57.791653249991185
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 36.0473526698752
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.62467732864379
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 46.8005540166205
- type: f1
value: 48.70316734098828
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 24.21052631578947
- type: f1
value: 19.523345189352405
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
yangzhao02/llama3-8b-base-dpo
|
yangzhao02
| 2024-06-28T17:43:47Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-28T13:17:00Z |
---
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: llama3-8b-base-dpo-120
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/zhaoyang1/huggingface/runs/fbpwm86a)
# llama3-8b-base-dpo-120
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4893
- Rewards/chosen: -0.5883
- Rewards/rejected: -1.3409
- Rewards/accuracies: 0.6905
- Rewards/margins: 0.7525
- Logps/rejected: -293.5897
- Logps/chosen: -331.5697
- Logits/rejected: 0.4485
- Logits/chosen: 0.2338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the hedged TRL sketch after this list):
- learning_rate: 5e-07
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 24
- gradient_accumulation_steps: 5
- total_train_batch_size: 120
- total_eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
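A minimal sketch of how these settings might map onto TRL's `DPOTrainer`
(assuming TRL ≈ 0.9; the base checkpoint, dataset, and dtype are placeholders,
since the card does not state them):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Meta-Llama-3-8B"  # assumption: a Llama-3-8B base checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: any preference dataset with prompt/chosen/rejected columns
train = load_dataset("your/preference-dataset", split="train")

args = DPOConfig(
    output_dir="llama3-8b-base-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=5,  # 1 x 5 x 24 GPUs = total batch size 120
    learning_rate=5e-7,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    bf16=True,  # stands in for "Native AMP" above; exact dtype is an assumption
)

trainer = DPOTrainer(model=model, args=args, train_dataset=train, tokenizer=tokenizer)
trainer.train()
```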
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.467 | 0.4906 | 250 | 0.4987 | -0.6034 | -1.2547 | 0.7262 | 0.6512 | -291.8655 | -331.8713 | 0.4838 | 0.2542 |
| 0.4536 | 0.9812 | 500 | 0.4893 | -0.5883 | -1.3409 | 0.6905 | 0.7525 | -293.5897 | -331.5697 | 0.4485 | 0.2338 |
### Framework versions
- Transformers 4.42.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ShauryaNova/autotrain-rp16o-pxwa0
|
ShauryaNova
| 2024-06-28T17:43:40Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"tensorboard",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-28T17:15:50Z |
---
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- autotrain
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: 'search_query: i love autotrain'
sentences:
- 'search_query: huggingface auto train'
- 'search_query: hugging face auto train'
- 'search_query: i love autotrain'
pipeline_tag: sentence-similarity
---
# Model Trained Using AutoTrain
- Problem type: Sentence Transformers
## Validation Metrics
- loss: 0.056603044271469116
- cosine_accuracy: 1.0
- dot_accuracy: 0.0
- manhattan_accuracy: 1.0
- euclidean_accuracy: 1.0
- max_accuracy: 1.0
- runtime: 43.9603
- samples_per_second: 13.194
- steps_per_second: 0.842
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the Hugging Face Hub
model = SentenceTransformer("ShauryaNova/autotrain-rp16o-pxwa0")
# Run inference
sentences = [
'search_query: autotrain',
'search_query: auto train',
'search_query: i love autotrain',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
```
|
ILKT/2024-06-24_22-31-18_epoch_60
|
ILKT
| 2024-06-28T17:42:00Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T15:33:26Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_60
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.86679920477137
- type: f1
value: 21.882489806075938
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 54.96
- type: ap
value: 16.367178584288883
- type: f1
value: 47.185167794463176
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 10.929651745558191
- type: v_measure_std
value: 1.4613173779872772
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 24.828513786146605
- type: f1
value: 23.50773383494496
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 24.9877029021151
- type: f1
value: 22.925341846598787
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 32.48150638870209
- type: f1
value: 31.422287777752146
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 32.242990654205606
- type: f1
value: 31.718443403022135
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 62.87576020851434
- type: ap
value: 73.15080832218128
- type: f1
value: 59.552044859037444
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 36.406650287523476
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.85212838167898
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 50.74792243767312
- type: f1
value: 51.79370776767938
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 21.72064777327935
- type: f1
value: 17.209117243594445
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-24_22-31-18_epoch_59
|
ILKT
| 2024-06-28T17:40:49Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T15:14:19Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_59
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.290258449304176
- type: f1
value: 21.509087845399694
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 56.05
- type: ap
value: 15.78398498218104
- type: f1
value: 47.397988042921675
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 7.320766934631824
- type: v_measure_std
value: 1.1646057607143652
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 26.869535978480158
- type: f1
value: 25.215177623598578
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 26.468273487456962
- type: f1
value: 24.373904499019712
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 33.36919973100202
- type: f1
value: 32.06093037046196
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 33.413674372848
- type: f1
value: 32.70535475843592
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 63.866203301476986
- type: ap
value: 73.76128512968913
- type: f1
value: 61.05892164159117
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.66040751469705
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 30.77367170717621
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 50.18005540166205
- type: f1
value: 51.779903761185395
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 21.538461538461537
- type: f1
value: 18.00426524613684
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
kurosekurose/wav2vec2-base-EMOPIA
|
kurosekurose
| 2024-06-28T17:39:11Z | 35 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-15T15:47:31Z |
---
base_model: facebook/wav2vec2-base
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-EMOPIA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-EMOPIA
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1691
- Accuracy: 0.6338
## Model description
More information needed
## Intended uses & limitations
More information needed
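Pending details from the author, here is a hedged inference sketch. It assumes
the checkpoint carries an audio-classification head (which the `accuracy` metric
suggests); the audio file name is a placeholder:

```python
from transformers import pipeline

clf = pipeline("audio-classification", model="kurosekurose/wav2vec2-base-EMOPIA")
# Accepts a local path or raw array; audio is resampled to the model's rate internally
print(clf("example_clip.wav"))
```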
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8716 | 1.0 | 269 | 0.9822 | 0.6197 |
| 0.8143 | 2.0 | 538 | 1.2324 | 0.5352 |
| 0.7584 | 3.0 | 807 | 1.0226 | 0.6479 |
| 0.6715 | 4.0 | 1076 | 0.9550 | 0.6620 |
| 0.6471 | 5.0 | 1345 | 1.1272 | 0.6761 |
| 0.5759 | 6.0 | 1614 | 1.2193 | 0.6761 |
| 0.4963 | 7.0 | 1883 | 1.2214 | 0.7183 |
| 0.4053 | 8.0 | 2152 | 1.3083 | 0.7465 |
| 0.3344 | 9.0 | 2421 | 1.6391 | 0.6620 |
| 0.3216 | 10.0 | 2690 | 1.7224 | 0.6479 |
| 0.2248 | 11.0 | 2959 | 1.7973 | 0.6761 |
| 0.1982 | 12.0 | 3228 | 2.0241 | 0.6479 |
| 0.1362 | 13.0 | 3497 | 1.9933 | 0.6479 |
| 0.0879 | 14.0 | 3766 | 2.0865 | 0.6479 |
| 0.0712 | 15.0 | 4035 | 2.1691 | 0.6338 |
### Framework versions
- Transformers 4.42.2
- Pytorch 2.3.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mlx-community/TinyMistral-248M-8bits
|
mlx-community
| 2024-06-28T17:32:16Z | 30 | 1 |
mlx
|
[
"mlx",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Skylion007/openwebtext",
"dataset:JeanKaddour/minipile",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-06-28T17:30:42Z |
---
language:
- en
license: apache-2.0
tags:
- mlx
datasets:
- Skylion007/openwebtext
- JeanKaddour/minipile
pipeline_tag: text-generation
inference:
parameters:
do_sample: true
temperature: 0.5
top_p: 0.5
top_k: 50
max_new_tokens: 250
repetition_penalty: 1.176
---
# mlx-community/TinyMistral-248M-8bits
The Model [mlx-community/TinyMistral-248M-8bits](https://huggingface.co/mlx-community/TinyMistral-248M-8bits) was converted to MLX format from [Locutusque/TinyMistral-248M](https://huggingface.co/Locutusque/TinyMistral-248M) using mlx-lm version **0.14.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
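# Load the 8-bit quantized weights and tokenizer from the Hub (downloads on first use)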
model, tokenizer = load("mlx-community/TinyMistral-248M-8bits")
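# Generate a completion; verbose=True prints the output and timing stats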
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
ILKT/2024-06-24_22-31-18_epoch_55
|
ILKT
| 2024-06-28T17:19:18Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T13:56:23Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_55
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.86282306163022
- type: f1
value: 21.32278358968244
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 55.57000000000001
- type: ap
value: 15.68012521716698
- type: f1
value: 46.76720480718772
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 11.44974273401705
- type: v_measure_std
value: 2.6336005930054065
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.622730329522525
- type: f1
value: 25.903006106915726
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.26020659124447
- type: f1
value: 25.128529286595942
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.593813046402154
- type: f1
value: 35.262887718884485
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.20757501229709
- type: f1
value: 35.17455612323974
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 62.52823631624674
- type: ap
value: 73.0495405579752
- type: f1
value: 59.16875508637578
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.58687427086566
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 32.17577390094799
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 49.903047091412745
- type: f1
value: 51.2490780359124
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 16.05263157894737
- type: f1
value: 14.630653114227302
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf
|
RichardErkhov
| 2024-06-28T17:18:27Z | 57 | 0 | null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-28T15:55:43Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-2b-ko-dev-pbc432 - GGUF
- Model creator: https://huggingface.co/gemmathon/
- Original model: https://huggingface.co/gemmathon/gemma-2b-ko-dev-pbc432/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2b-ko-dev-pbc432.Q2_K.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q2_K.gguf) | Q2_K | 1.08GB |
| [gemma-2b-ko-dev-pbc432.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.IQ3_XS.gguf) | IQ3_XS | 1.16GB |
| [gemma-2b-ko-dev-pbc432.IQ3_S.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.IQ3_S.gguf) | IQ3_S | 1.2GB |
| [gemma-2b-ko-dev-pbc432.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [gemma-2b-ko-dev-pbc432.IQ3_M.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.IQ3_M.gguf) | IQ3_M | 1.22GB |
| [gemma-2b-ko-dev-pbc432.Q3_K.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q3_K.gguf) | Q3_K | 1.29GB |
| [gemma-2b-ko-dev-pbc432.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [gemma-2b-ko-dev-pbc432.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [gemma-2b-ko-dev-pbc432.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [gemma-2b-ko-dev-pbc432.Q4_0.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q4_0.gguf) | Q4_0 | 1.44GB |
| [gemma-2b-ko-dev-pbc432.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [gemma-2b-ko-dev-pbc432.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [gemma-2b-ko-dev-pbc432.Q4_K.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q4_K.gguf) | Q4_K | 1.52GB |
| [gemma-2b-ko-dev-pbc432.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [gemma-2b-ko-dev-pbc432.Q4_1.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q4_1.gguf) | Q4_1 | 1.56GB |
| [gemma-2b-ko-dev-pbc432.Q5_0.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q5_0.gguf) | Q5_0 | 1.68GB |
| [gemma-2b-ko-dev-pbc432.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [gemma-2b-ko-dev-pbc432.Q5_K.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q5_K.gguf) | Q5_K | 1.71GB |
| [gemma-2b-ko-dev-pbc432.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [gemma-2b-ko-dev-pbc432.Q5_1.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q5_1.gguf) | Q5_1 | 1.79GB |
| [gemma-2b-ko-dev-pbc432.Q6_K.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q6_K.gguf) | Q6_K | 1.92GB |
| [gemma-2b-ko-dev-pbc432.Q8_0.gguf](https://huggingface.co/RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf/blob/main/gemma-2b-ko-dev-pbc432.Q8_0.gguf) | Q8_0 | 2.49GB |
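As a hedged starting point, recent versions of `llama-cpp-python` (with
`huggingface_hub` installed) can pull a quant straight from this repo; the
Q4_K_M pick, prompt, and token budget below are assumptions:

```python
from llama_cpp import Llama

# Downloads the chosen quant from the Hub and loads it in one step
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/gemmathon_-_gemma-2b-ko-dev-pbc432-gguf",
    filename="gemma-2b-ko-dev-pbc432.Q4_K_M.gguf",
)
print(llm("안녕하세요, 오늘의 날씨는", max_tokens=48)["choices"][0]["text"])
```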
Original model description:
---
license: other
library_name: transformers
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF
|
mradermacher
| 2024-06-28T17:13:34Z | 17 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:aixsatoshi/Llama-3-Elyza-Youko-moe-2x8B",
"base_model:quantized:aixsatoshi/Llama-3-Elyza-Youko-moe-2x8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-28T16:25:55Z |
---
base_model: aixsatoshi/Llama-3-Elyza-Youko-moe-2x8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/aixsatoshi/Llama-3-Elyza-Youko-moe-2x8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
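All quants in this repo fit in single files, but for repos where a large quant
is split into parts, concatenation is plain byte-wise joining. A sketch — the
part-name pattern is an assumption, so check the repo file listing first:

```python
import glob
import shutil

# Hypothetical part names; adjust the glob to match the actual files
parts = sorted(glob.glob("Llama-3-Elyza-Youko-moe-2x8B.Q8_0.gguf.part*"))
with open("Llama-3-Elyza-Youko-moe-2x8B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # append each part in order
```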
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.Q2_K.gguf) | Q2_K | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.IQ3_XS.gguf) | IQ3_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.Q3_K_S.gguf) | Q3_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.IQ3_S.gguf) | IQ3_S | 6.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.IQ3_M.gguf) | IQ3_M | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.Q3_K_M.gguf) | Q3_K_M | 6.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.Q3_K_L.gguf) | Q3_K_L | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.IQ4_XS.gguf) | IQ4_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.Q4_K_S.gguf) | Q4_K_S | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.Q4_K_M.gguf) | Q4_K_M | 8.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.Q5_K_S.gguf) | Q5_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.Q5_K_M.gguf) | Q5_K_M | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.Q6_K.gguf) | Q6_K | 11.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Elyza-Youko-moe-2x8B-GGUF/resolve/main/Llama-3-Elyza-Youko-moe-2x8B.Q8_0.gguf) | Q8_0 | 14.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ILKT/2024-06-24_22-31-18_epoch_52
|
ILKT
| 2024-06-28T17:07:14Z | 147 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T12:57:35Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_52
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.389662027833005
- type: f1
value: 21.43078920869762
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 53.42
- type: ap
value: 15.60707097305462
- type: f1
value: 45.74272892086198
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 8.808695576377874
- type: v_measure_std
value: 1.8978369423148689
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.194351042367188
- type: f1
value: 26.460418424351168
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.584849975405813
- type: f1
value: 25.880856873378306
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.66778749159381
- type: f1
value: 35.73019148736344
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.95671421544516
- type: f1
value: 35.40835927928668
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 65.36634810309876
- type: ap
value: 74.39369241586353
- type: f1
value: 62.05646583155308
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 36.84022540962507
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 32.210088799097576
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 47.78393351800555
- type: f1
value: 49.56616389760143
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 21.558704453441297
- type: f1
value: 17.83311916226355
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
srinivasan-sridhar28/bert-finetuned-ner
|
srinivasan-sridhar28
| 2024-06-28T17:06:46Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-28T14:01:58Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9345346338237726
- name: Recall
type: recall
value: 0.9513631773813531
- name: F1
type: f1
value: 0.9428738220331916
- name: Accuracy
type: accuracy
value: 0.9865632542532525
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0628
- Precision: 0.9345
- Recall: 0.9514
- F1: 0.9429
- Accuracy: 0.9866
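Since the card does not yet include a usage snippet, here is a minimal sketch of running this checkpoint with the `transformers` pipeline API. The model id comes from this card; the example sentence and the aggregation strategy are illustrative assumptions, not part of the original setup.
```python
from transformers import pipeline

# Hypothetical usage sketch: load the fine-tuned NER checkpoint from the Hub.
ner = pipeline(
    "token-classification",
    model="srinivasan-sridhar28/bert-finetuned-ner",
    aggregation_strategy="simple",  # assumption: merge sub-word tokens into entity spans
)

print(ner("Hugging Face is based in New York City."))
# Each prediction is a dict with entity_group, score, word, start, and end.
```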
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
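For reference, the list above maps onto `TrainingArguments` roughly as follows; this is a reconstruction from the hyperparameter list, not the author's original training script, and the output directory is a placeholder.
```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above; not the original script.
training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # The Adam betas and epsilon listed above are the transformers defaults.
)
```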
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.078 | 1.0 | 1756 | 0.0681 | 0.9034 | 0.9298 | 0.9164 | 0.9819 |
| 0.0362 | 2.0 | 3512 | 0.0692 | 0.9306 | 0.9428 | 0.9366 | 0.9850 |
| 0.0205 | 3.0 | 5268 | 0.0628 | 0.9345 | 0.9514 | 0.9429 | 0.9866 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
iremmd/thy_model_32
|
iremmd
| 2024-06-28T17:04:04Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-28T16:51:53Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** iremmd
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Kit-Lemonfoot/kitlemonfoot_gptsovits_models
|
Kit-Lemonfoot
| 2024-06-28T16:59:51Z | 0 | 1 | null |
[
"speech",
"gpt-sovits",
"dataset:Kit-Lemonfoot/LemonfootVoiceDatasets",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-05T02:16:49Z |
---
license: creativeml-openrail-m
tags:
- speech
- gpt-sovits
datasets:
- Kit-Lemonfoot/LemonfootVoiceDatasets
---
# Kit Lemonfoot's GPT-SoVITS Models
This repository exists to host GPT-SoVITS models made by Kit Lemonfoot.
Please credit me if you use any models in this repository in any way.
## Currently Available Models:
- Vestia Zeta [Hololive ID]
- Mori Calliope [Hololive EN]
- Amelia Watson [Hololive EN]
- Shiori Novella [Hololive EN]
- Finana Ryugu [Nijisanji EN]
- Pipkin Pippa [Phase Connect]
- Tenma Maemi [Phase Connect]
- Ashelia Rinkou [Phase Connect]
- Dokibird [YouTubers]
- Mint Fantôme [YouTubers]
|
mgoin/Meta-Llama-3-8B-Instruct-pruned50-quant-ds
|
mgoin
| 2024-06-28T16:59:41Z | 25 | 0 |
transformers
|
[
"transformers",
"onnx",
"llama",
"text-generation",
"deepsparse",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-06-28T16:13:17Z |
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
inference: false
tags:
- deepsparse
---
Llama 3 8B Instruct that has been compressed in one-shot to 50% sparsity and INT8 weights+activations using SparseGPT, SmoothQuant, and GPTQ.
Made with SparseML and DeepSparse 1.7. Install with `pip install deepsparse~=1.7 "sparseml[transformers]"~=1.7 "numpy<2"`.
Here is the script used for SparseML compression:
```python
from sparseml.transformers import (
SparseAutoModelForCausalLM,
SparseAutoTokenizer,
load_dataset,
compress,
)
model = SparseAutoModelForCausalLM.from_pretrained(
"meta-llama/Meta-Llama-3-8B-Instruct", device_map="auto"
)
tokenizer = SparseAutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
dataset = load_dataset("garage-bAInd/Open-Platypus")
def format_data(data):
instruction = tokenizer.apply_chat_template(
[{"role": "user", "content": data["instruction"]}],
tokenize=False,
add_generation_prompt=True,
)
return {"text": instruction + data["output"]}
dataset = dataset.map(format_data)
recipe = """
compression_stage:
run_type: oneshot
oneshot_modifiers:
QuantizationModifier:
ignore:
# These operations don't make sense to quantize
- LlamaRotaryEmbedding
- LlamaRMSNorm
- SiLUActivation
- QuantizableMatMul
# Skip quantizing the layers with the most sensitive activations
- model.layers.1.mlp.down_proj
- model.layers.31.mlp.down_proj
- model.layers.14.self_attn.q_proj
- model.layers.14.self_attn.k_proj
- model.layers.14.self_attn.v_proj
post_oneshot_calibration: true
scheme_overrides:
# Enable channelwise quantization for better accuracy
Linear:
weights:
num_bits: 8
symmetric: true
strategy: channel
# For the embeddings, only weight-quantization makes sense
Embedding:
input_activations: null
weights:
num_bits: 8
symmetric: false
SparseGPTModifier:
sparsity: 0.5
quantize: True
targets: ['re:model.layers.\\d*$']
"""
compress(
model=model,
tokenizer=tokenizer,
dataset=dataset,
recipe=recipe,
output_dir="./one-shot-checkpoint",
)
```
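For inference, a hedged sketch with DeepSparse's `TextGeneration` pipeline follows; the `hf:` model stub convention and the generation arguments are assumptions based on typical DeepSparse 1.7 usage rather than instructions from this card.
```python
from deepsparse import TextGeneration

# Assumption: this repo hosts a DeepSparse deployment directory addressable
# via an "hf:" stub; swap in a local path if that is not the case.
pipeline = TextGeneration(model="hf:mgoin/Meta-Llama-3-8B-Instruct-pruned50-quant-ds")

output = pipeline("What does 50% sparsity mean for inference speed?", max_new_tokens=64)
print(output.generations[0].text)
```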
|
ILKT/2024-06-24_22-31-18_epoch_49
|
ILKT
| 2024-06-28T16:55:07Z | 141 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T11:59:56Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_49
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.220675944333998
- type: f1
value: 20.735651305223108
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 54.22
- type: ap
value: 15.071677708208137
- type: f1
value: 45.499279764845674
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 10.694919674511663
- type: v_measure_std
value: 1.606686134664951
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.37726967047747
- type: f1
value: 25.537951926761448
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 26.27151992129858
- type: f1
value: 24.27490477504838
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 33.92737054472092
- type: f1
value: 32.550588065653145
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 32.823413674372844
- type: f1
value: 32.00836868024851
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 67.64552562988706
- type: ap
value: 75.1700569219729
- type: f1
value: 63.86372795462002
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.26588194986782
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.751898030974658
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 48.18559556786703
- type: f1
value: 50.63657474760809
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 22.469635627530362
- type: f1
value: 19.48121978570884
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-24_22-31-18_epoch_47
|
ILKT
| 2024-06-28T16:44:28Z | 142 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T11:22:04Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_47
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 24.224652087475146
- type: f1
value: 22.197349586271862
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 55.11000000000001
- type: ap
value: 15.439755473946972
- type: f1
value: 46.63708808621908
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 11.483440601500842
- type: v_measure_std
value: 2.214700617841053
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 29.56624075319435
- type: f1
value: 27.566380470193025
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 29.252336448598133
- type: f1
value: 26.43327803125083
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 37.726967047747145
- type: f1
value: 35.488045013857636
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 37.40777176586326
- type: f1
value: 35.73001265939156
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 62.589052997393566
- type: ap
value: 73.31614916490435
- type: f1
value: 59.74859354718296
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.15870428407872
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.302698279530393
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 49.362880886426595
- type: f1
value: 50.96460093590364
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 24.33198380566802
- type: f1
value: 19.361423993025443
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
turkish-nlp-suite/POS-bert-128K-midsize
|
turkish-nlp-suite
| 2024-06-28T16:43:00Z | 182 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-28T16:42:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
1231czx/7b_dpo_iter2_7e7_bz_32_cv
|
1231czx
| 2024-06-28T16:42:52Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-28T16:39:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
turkish-nlp-suite/POS-bert-52K-midsize
|
turkish-nlp-suite
| 2024-06-28T16:40:33Z | 182 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-28T16:38:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
outy/haniwa_LoRA2
|
outy
| 2024-06-28T16:40:12Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-06-16T14:59:33Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK haniwa
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - outy/haniwa_LoRA2
<Gallery />
## Model description
These are outy/haniwa_LoRA2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "a photo of TOK haniwa" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/outy/haniwa_LoRA2/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# See the sketch below for one possible way to run this diffusion pipeline.
```
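As a starting point, here is a minimal, untested sketch of loading these weights on top of the SDXL base model with `diffusers`; the trigger phrase, base model, and VAE come from this card, while the dtype, device, and output handling are assumptions.
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Untested sketch; a standard diffusers LoRA pattern, not an author-provided recipe.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16  # VAE used for training
)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("outy/haniwa_LoRA2")

image = pipe("a photo of TOK haniwa").images[0]  # trigger phrase from this card
image.save("haniwa.png")
```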
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
nelsonjq/frame-semantic-transformer-french-small
|
nelsonjq
| 2024-06-28T16:37:34Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"framenet",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-28T16:25:11Z |
---
license: apache-2.0
language:
- fr
tags:
- framenet
---
Fine-tuned T5 small model for use as a frame semantic parser for the French language in the [Frame Semantic Transformer project](https://github.com/chanind/frame-semantic-transformer). This model is trained on data from the [French FrameNet project, ASFALDA](https://sites.google.com/site/anrasfalda/).
# Usage
This is meant to be used as part of [Frame Semantic Transformer](https://github.com/chanind/frame-semantic-transformer). See that project for usage instructions.
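As a starting point, a hedged sketch of using this checkpoint through that library is shown below; whether the `FrameSemanticTransformer` constructor accepts a Hub model id like this one is an assumption, and the example sentence is illustrative.
```python
from frame_semantic_transformer import FrameSemanticTransformer

# Assumption: the constructor accepts a model name/path for this Hub checkpoint.
parser = FrameSemanticTransformer("nelsonjq/frame-semantic-transformer-french-small")

result = parser.detect_frames(
    "Le rapport présente les engagements environnementaux de l'entreprise."
)
for frame in result.frames:
    print(frame.name, [(e.name, e.text) for e in frame.frame_elements])
```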
# Tasks
This model is trained to perform 3 tasks related to semantic frame parsing:
* Identify frame trigger locations in the text
* Classify the frame given a trigger location
* Extract frame elements in the sentence
# Performance
This model is trained on the full ASFALDA dataset; evaluation is pending.
# More info
This training was part of a research project using FrameNet to analyze Corporate Social Responsibility (CSR, or RSE in French) reports.
The GitHub repository of this project can be [accessed here: RSE-FrameNet](https://github.com/NelsonJQ/RSE-FrameNet).
|
turkish-nlp-suite/POS-bert-128K-small
|
turkish-nlp-suite
| 2024-06-28T16:36:59Z | 188 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-28T16:36:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
turkish-nlp-suite/POS-bert-32K-midsize
|
turkish-nlp-suite
| 2024-06-28T16:36:34Z | 181 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-28T16:36:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shng2025/xlm-roberta-base-finetuned-panx-en
|
shng2025
| 2024-06-28T16:36:25Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-27T13:44:42Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4010
- F1: 0.6807
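As the card lacks a usage example, a minimal sketch of loading this checkpoint with the Auto classes is given below; the example sentence is illustrative and the token-level decoding is a simplification, not part of the original evaluation.
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Illustrative sketch only; not part of the original evaluation setup.
model_id = "shng2025/xlm-roberta-base-finetuned-panx-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Jeff Dean works at Google in California.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
labels = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), labels)))
```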
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0335 | 1.0 | 50 | 0.4896 | 0.6043 |
| 0.4883 | 2.0 | 100 | 0.4397 | 0.6465 |
| 0.3936 | 3.0 | 150 | 0.4010 | 0.6807 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
damgomz/fp_bs1_lr5_x4
|
damgomz
| 2024-06-28T16:36:07Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-06-25T10:12:38Z |
---
language: en
tags:
- fill-mask
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (CO2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (CO2eq in kg) | [More Information Needed] |
## Note
June 24, 2024!
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | fp_bs1_lr5_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-05 |
| batch_size | 1 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 662818 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 15.241581 | 12.120975 |
| 0.5 | 5.497093 | 7.297294 |
| 1.0 | 7.354010 | 7.360407 |
| 1.5 | 7.350967 | 7.350409 |
| 2.0 | 7.398173 | 7.515934 |
| 2.5 | 7.383929 | 7.345428 |
| 3.0 | 7.333906 | 7.355884 |
| 3.5 | 7.334240 | 7.339396 |
| 4.0 | 7.327380 | 7.331812 |
| 4.5 | 7.311137 | 7.321233 |
| 5.0 | 7.297813 | 7.296060 |
| 5.5 | 7.278189 | 7.279098 |
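Since the card names the task but gives no usage snippet, a minimal, hypothetical sketch of querying this ALBERT checkpoint with the fill-mask pipeline follows; the example sentence is illustrative only.
```python
from transformers import pipeline

# Hypothetical usage sketch for this ALBERT fill-mask checkpoint.
fill = pipeline("fill-mask", model="damgomz/fp_bs1_lr5_x4")
print(fill("The capital of France is [MASK]."))
```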
|
turkish-nlp-suite/POS-bert-52K-small
|
turkish-nlp-suite
| 2024-06-28T16:34:18Z | 182 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-28T16:34:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ILKT/2024-06-24_22-31-18_epoch_45
|
ILKT
| 2024-06-28T16:33:45Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T10:43:17Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_45
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.95228628230616
- type: f1
value: 20.758134175255407
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 59.160000000000004
- type: ap
value: 16.042846996837472
- type: f1
value: 49.077145077829684
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 7.790770505399927
- type: v_measure_std
value: 0.9426097962459844
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 24.926025554808334
- type: f1
value: 22.683891279408485
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 25.036891293654694
- type: f1
value: 22.678050107906156
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 31.274377942165426
- type: f1
value: 29.945511983056594
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 30.69355632070831
- type: f1
value: 29.846805091244335
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 63.61424847958297
- type: ap
value: 72.9605020338053
- type: f1
value: 59.65659959759218
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.51152964059234
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.444001049140937
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 46.191135734072034
- type: f1
value: 47.06241405765367
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 21.761133603238868
- type: f1
value: 19.445751422628245
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
BUAADreamer/Yi-VL-34B-hf
|
BUAADreamer
| 2024-06-28T16:31:36Z | 12 | 5 |
transformers
|
[
"transformers",
"safetensors",
"llama-factory",
"yi-vl",
"llava",
"visual-question-answering",
"zh",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] |
visual-question-answering
| 2024-05-15T07:14:05Z |
---
library_name: transformers
tags:
- llama-factory
- yi-vl
- llava
license: other
language:
- zh
- en
pipeline_tag: visual-question-answering
---
This is the Hugging Face version of the [Yi-VL-34B](https://huggingface.co/01-ai/Yi-VL-34B) model.
You may use this model for fine-tuning on downstream tasks; we recommend our efficient fine-tuning toolkit, [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
- **Developed by:** [01-AI](https://www.01.ai/).
- **Language(s) (NLP):** Chinese/English
- **License:** [Yi Series Model License](https://huggingface.co/01-ai/Yi-VL-34B/blob/main/LICENSE)
Usage:
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq, LlavaConfig
import transformers
from torch import nn
class LlavaMultiModalProjectorYiVL(nn.Module):
def __init__(self, config: "LlavaConfig"):
super().__init__()
self.linear_1 = nn.Linear(config.vision_config.hidden_size, config.text_config.hidden_size, bias=True)
self.linear_2 = nn.LayerNorm(config.text_config.hidden_size, bias=True)
self.linear_3 = nn.Linear(config.text_config.hidden_size, config.text_config.hidden_size, bias=True)
self.linear_4 = nn.LayerNorm(config.text_config.hidden_size, bias=True)
self.act = nn.GELU()
def forward(self, image_features):
hidden_states = self.linear_1(image_features)
hidden_states = self.linear_2(hidden_states)
hidden_states = self.act(hidden_states)
hidden_states = self.linear_3(hidden_states)
hidden_states = self.linear_4(hidden_states)
return hidden_states
# Monkey patching LlavaMultiModalProjector is mandatory: Yi-VL interleaves LayerNorms with the linear layers, unlike the stock two-layer projector in transformers
transformers.models.llava.modeling_llava.LlavaMultiModalProjector = LlavaMultiModalProjectorYiVL
model_id = "BUAADreamer/Yi-VL-34B-hf"
messages = [
{ "role": "user", "content": "<image>What's in the picture?" }
]
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
model = AutoModelForVision2Seq.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
).to(0)
processor = AutoProcessor.from_pretrained(model_id)
text = [processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)]
images = [Image.open(requests.get(image_file, stream=True).raw)]
inputs = processor(text=text, images=images, return_tensors='pt').to(0, torch.float16)
output = model.generate(**inputs, max_new_tokens=200)
output = processor.batch_decode(output, skip_special_tokens=True)
print(output[0].split("Assistant:")[-1].strip())  # batch_decode returns a list of strings; take the first item
```
Alternatively, you can launch a web demo with the CLI command from [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory):
```bash
llamafactory-cli webchat \
--model_name_or_path BUAADreamer/Yi-VL-34B-hf \
--template yi_vl \
--visual_inputs
```
|
BUAADreamer/Yi-VL-6B-hf
|
BUAADreamer
| 2024-06-28T16:31:23Z | 236 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"llama-factory",
"yi-vl",
"visual-question-answering",
"zh",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] |
visual-question-answering
| 2024-05-14T08:51:09Z |
---
library_name: transformers
tags:
- llama-factory
- yi-vl
- llava
license: other
language:
- zh
- en
pipeline_tag: visual-question-answering
---
This is the Hugging Face version of the [Yi-VL-6B](https://huggingface.co/01-ai/Yi-VL-6B) model.
You may use this model for fine-tuning on downstream tasks; we recommend our efficient fine-tuning toolkit, [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
- **Developed by:** [01-AI](https://www.01.ai/).
- **Language(s) (NLP):** Chinese/English
- **License:** [Yi Series Model License](https://huggingface.co/01-ai/Yi-VL-34B/blob/main/LICENSE)
Usage:
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq, LlavaConfig
import transformers
from torch import nn
class LlavaMultiModalProjectorYiVL(nn.Module):
def __init__(self, config: "LlavaConfig"):
super().__init__()
self.linear_1 = nn.Linear(config.vision_config.hidden_size, config.text_config.hidden_size, bias=True)
self.linear_2 = nn.LayerNorm(config.text_config.hidden_size, bias=True)
self.linear_3 = nn.Linear(config.text_config.hidden_size, config.text_config.hidden_size, bias=True)
self.linear_4 = nn.LayerNorm(config.text_config.hidden_size, bias=True)
self.act = nn.GELU()
def forward(self, image_features):
hidden_states = self.linear_1(image_features)
hidden_states = self.linear_2(hidden_states)
hidden_states = self.act(hidden_states)
hidden_states = self.linear_3(hidden_states)
hidden_states = self.linear_4(hidden_states)
return hidden_states
# Monkey patching LlavaMultiModalProjector is mandatory: Yi-VL interleaves LayerNorms with the linear layers, unlike the stock two-layer projector in transformers
transformers.models.llava.modeling_llava.LlavaMultiModalProjector = LlavaMultiModalProjectorYiVL
model_id = "BUAADreamer/Yi-VL-6B-hf"
messages = [
{ "role": "user", "content": "<image>What's in the picture?" }
]
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
model = AutoModelForVision2Seq.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
).to(0)
processor = AutoProcessor.from_pretrained(model_id)
text = [processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)]
images = [Image.open(requests.get(image_file, stream=True).raw)]
inputs = processor(text=text, images=images, return_tensors='pt').to(0, torch.float16)
output = model.generate(**inputs, max_new_tokens=200)
output = processor.batch_decode(output, skip_special_tokens=True)
print(output[0].split("Assistant:")[-1].strip())  # batch_decode returns a list of strings; take the first item
```
Alternatively, you can launch a web demo with the CLI command from [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory):
```bash
llamafactory-cli webchat \
--model_name_or_path BUAADreamer/Yi-VL-6B-hf \
  --template yi_vl \
--visual_inputs
```
# [lmms-eval Evaluation Results](https://github.com/EvolvingLMMs-Lab/lmms-eval)
| Metric     | Value |
|------------|------:|
| MMMU_val   |  36.8 |
| CMMMU_val  |  32.2 |
|
turkish-nlp-suite/POS-bert-52K-large
|
turkish-nlp-suite
| 2024-06-28T16:29:24Z | 183 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-28T16:28:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
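In the meantime, here is a minimal sketch (assuming, from the repository name, that this is a Turkish part-of-speech tagger; the example sentence is illustrative):

```python
from transformers import pipeline

# Load the token-classification head; the label set comes from the checkpoint's config
tagger = pipeline("token-classification", model="turkish-nlp-suite/POS-bert-52K-large")

# Tag a sample Turkish sentence and print one (token, label) pair per line
for token in tagger("Ankara Türkiye'nin başkentidir."):
    print(token["word"], token["entity"])
```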
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jnjnpx/fine-tuned-bert-extractive-summarization
|
Jnjnpx
| 2024-06-28T16:28:09Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"lao-extractive-summarization",
"lo",
"base_model:Twitter/twhin-bert-base",
"base_model:finetune:Twitter/twhin-bert-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-19T17:46:20Z |
---
license: apache-2.0
base_model: Twitter/twhin-bert-base
tags:
- text-classification
- generated_from_trainer
- lao-extractive-summarization
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: fine-tuned-bert-extractive-summarization
results: []
language:
- lo
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-bert-extractive-summarization
This model is a fine-tuned version of [Twitter/twhin-bert-base](https://huggingface.co/Twitter/twhin-bert-base) on the LaoNews dataset for Lao text extractive summarization.
It achieves the following results on the evaluation set:
- Loss: 0.5566
- Accuracy: 0.6995
- Precision: 0.6947
- Recall: 0.6995
- F1: 0.6961
## Model description
More information needed
## Intended uses & limitations
More information needed
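That said, a minimal inference sketch might look like the following (assuming the classifier scores each sentence for inclusion in the summary, with label index 1 meaning "include" — both assumptions, since the card does not document the label mapping; the sentences are illustrative placeholders):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Jnjnpx/fine-tuned-bert-extractive-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Candidate sentences from one article (placeholders)
sentences = ["ປະໂຫຍກທີ 1.", "ປະໂຫຍກທີ 2.", "ປະໂຫຍກທີ 3."]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Probability that each sentence belongs in the summary (assumed label 1 = "include")
scores = logits.softmax(dim=-1)[:, 1]

# Keep the top-2 sentences, in their original order, as the extractive summary
top = sorted(scores.topk(2).indices.tolist())
print(" ".join(sentences[i] for i in top))
```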
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5748 | 1.0 | 7107 | 0.5609 | 0.6916 | 0.6858 | 0.6916 | 0.6873 |
| 0.5552 | 2.0 | 14215 | 0.5659 | 0.6839 | 0.6931 | 0.6839 | 0.6870 |
| 0.5364 | 3.0 | 21321 | 0.5566 | 0.6995 | 0.6947 | 0.6995 | 0.6961 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
turkish-nlp-suite/POS-bert-10K-midsize
|
turkish-nlp-suite
| 2024-06-28T16:25:48Z | 180 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-28T16:22:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
turkish-nlp-suite/POS-bert-10K-small
|
turkish-nlp-suite
| 2024-06-28T16:25:23Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-28T16:25:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ekaterina-blatova-jb/model_lr1e-4_old_scheduler_with_t_max_275_non_relevant_v3
|
ekaterina-blatova-jb
| 2024-06-28T16:23:46Z | 170 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-28T16:21:47Z |
---
{}
---
## Evaluation results
- Validation loss on the whole input: 1.0965804909355938
- Validation loss on completion: 1.0036829547025263
|
ILKT/2024-06-24_22-31-18_epoch_43
|
ILKT
| 2024-06-28T16:22:56Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T10:05:22Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_43
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.34990059642147
- type: f1
value: 21.04164793833201
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 55.38999999999999
- type: ap
value: 15.232112617919086
- type: f1
value: 46.512958539427736
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 11.478969307432818
- type: v_measure_std
value: 2.1069305474401228
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.86684599865501
- type: f1
value: 26.14933857940183
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 29.119527791441218
- type: f1
value: 25.719236957640568
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.73100201748487
- type: f1
value: 32.349846283817705
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.24495818986719
- type: f1
value: 32.69487276994948
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 64.96090356211988
- type: ap
value: 74.75796133106857
- type: f1
value: 61.7993966493105
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.164896504315394
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.497731684906288
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 46.8421052631579
- type: f1
value: 47.73931132210295
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 21.072874493927127
- type: f1
value: 18.733880418534408
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
jdh-algo/JoyType-v1-1M
|
jdh-algo
| 2024-06-28T16:18:28Z | 49 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:apache-2.0",
"endpoints_compatible",
"diffusers:StableDiffusionControlNetPipeline",
"region:us"
] |
text-to-image
| 2024-06-28T12:22:34Z |
---
license: apache-2.0
---
|
ILKT/2024-06-24_22-31-18_epoch_42
|
ILKT
| 2024-06-28T16:17:33Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T09:46:20Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_42
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.220675944333998
- type: f1
value: 21.226560888486727
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 55.730000000000004
- type: ap
value: 15.178884164375855
- type: f1
value: 46.63325157293971
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 9.07097609273709
- type: v_measure_std
value: 1.650707974615873
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 23.550773369199735
- type: f1
value: 20.981492242779964
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 22.744712247909497
- type: f1
value: 19.939837158771336
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 29.949562878278414
- type: f1
value: 27.98335375031368
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 28.93753074274471
- type: f1
value: 27.17886319241846
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 64.4801621778164
- type: ap
value: 74.41796588932846
- type: f1
value: 61.67815167577027
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 36.127519388733106
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.977371038739733
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 47.86703601108033
- type: f1
value: 48.88719197911639
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 19.190283400809715
- type: f1
value: 16.168771916705342
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ton-anh/testing
|
ton-anh
| 2024-06-28T16:05:06Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-06-28T11:45:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YorkieOH10/Qwen2-7B-Multilingual-RP-Q4_K_M-GGUF
|
YorkieOH10
| 2024-06-28T16:00:52Z | 342 | 1 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"ko",
"ja",
"zh",
"es",
"base_model:maywell/Qwen2-7B-Multilingual-RP",
"base_model:quantized:maywell/Qwen2-7B-Multilingual-RP",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-28T16:00:32Z |
---
base_model: maywell/Qwen2-7B-Multilingual-RP
language:
- en
- ko
- ja
- zh
- es
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# YorkieOH10/Qwen2-7B-Multilingual-RP-Q4_K_M-GGUF
This model was converted to GGUF format from [`maywell/Qwen2-7B-Multilingual-RP`](https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo YorkieOH10/Qwen2-7B-Multilingual-RP-Q4_K_M-GGUF --hf-file qwen2-7b-multilingual-rp-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo YorkieOH10/Qwen2-7B-Multilingual-RP-Q4_K_M-GGUF --hf-file qwen2-7b-multilingual-rp-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo YorkieOH10/Qwen2-7B-Multilingual-RP-Q4_K_M-GGUF --hf-file qwen2-7b-multilingual-rp-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo YorkieOH10/Qwen2-7B-Multilingual-RP-Q4_K_M-GGUF --hf-file qwen2-7b-multilingual-rp-q4_k_m.gguf -c 2048
```
|
ILKT/2024-06-24_22-31-18_epoch_38
|
ILKT
| 2024-06-28T15:58:49Z | 144 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T08:30:54Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_38
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 21.202783300198806
- type: f1
value: 18.303579076831948
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 58.29
- type: ap
value: 15.607598411645975
- type: f1
value: 47.87244776449094
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 6.5864826792499205
- type: v_measure_std
value: 1.56919919409007
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 25.34297242770679
- type: f1
value: 23.33749939021394
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 24.722085587801278
- type: f1
value: 22.22691652331911
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 32.972427706792196
- type: f1
value: 31.385778219184473
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 31.731431382193804
- type: f1
value: 30.239004134542107
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 63.87489139878366
- type: ap
value: 73.70602922819654
- type: f1
value: 60.951203669553486
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.29951156684666
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.29721661528533
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 44.390581717451525
- type: f1
value: 44.43563490035684
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 17.145748987854255
- type: f1
value: 15.007113843553077
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
manjoslima/vit-base-patch16-224-finetuned-flower
|
manjoslima
| 2024-06-28T15:58:41Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-28T14:02:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.3.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
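A minimal inference sketch (the image path is a placeholder; the label names come from the fine-tuned checkpoint):

```python
from transformers import pipeline

# Load the fine-tuned classifier and score a local flower photo
classifier = pipeline("image-classification", model="manjoslima/vit-base-patch16-224-finetuned-flower")
print(classifier("flower.jpg")[0])  # top prediction: {"label": ..., "score": ...}
```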
|
ShauryaNova/autotrain-ShauryaNova
|
ShauryaNova
| 2024-06-28T15:58:05Z | 7 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"tensorboard",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-28T15:57:49Z |
---
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- autotrain
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: 'search_query: i love autotrain'
sentences:
- 'search_query: huggingface auto train'
- 'search_query: hugging face auto train'
- 'search_query: i love autotrain'
pipeline_tag: sentence-similarity
---
# Model Trained Using AutoTrain
- Problem type: Sentence Transformers
## Validation Metrics
- loss: 6.586054801940918
- validation_pearson_cosine: 0.15590647163663807
- validation_spearman_cosine: 0.28867513459481287
- validation_pearson_manhattan: 0.20874094632850035
- validation_spearman_manhattan: 0.28867513459481287
- validation_pearson_euclidean: 0.21989747670451043
- validation_spearman_euclidean: 0.28867513459481287
- validation_pearson_dot: 0.15590640231031966
- validation_spearman_dot: 0.28867513459481287
- validation_pearson_max: 0.21989747670451043
- validation_spearman_max: 0.28867513459481287
- runtime: 0.1469
- samples_per_second: 34.037
- steps_per_second: 6.807
- epoch: 3.0
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the Hugging Face Hub
model = SentenceTransformer("ShauryaNova/autotrain-ShauryaNova")
# Run inference
sentences = [
'search_query: autotrain',
'search_query: auto train',
'search_query: i love autotrain',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
```
|
shayantreylon2/Phi-3-mini_model
|
shayantreylon2
| 2024-06-28T15:57:28Z | 19 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-28T15:52:19Z |
---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** shayantreylon2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
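Since this upload is in GGUF format, it can be run directly with llama.cpp. A minimal sketch (the `.gguf` file name below is a placeholder; check the repository's file listing for the actual name):
```bash
# <quant-file>.gguf is a placeholder for the actual file in this repository
llama-cli --hf-repo shayantreylon2/Phi-3-mini_model --hf-file <quant-file>.gguf -p "Hello"
```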
|
ILKT/2024-06-24_22-31-18_epoch_34
|
ILKT
| 2024-06-28T15:40:08Z | 142 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T07:14:14Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_34
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.842942345924456
- type: f1
value: 20.749095215583708
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 54.370000000000005
- type: ap
value: 15.190356100252247
- type: f1
value: 45.90909757835719
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 9.615665159425438
- type: v_measure_std
value: 1.262076007772157
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 26.87626092804304
- type: f1
value: 24.630850778599182
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 26.778160354156416
- type: f1
value: 24.01717906246031
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.8453261600538
- type: f1
value: 32.56539021281307
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.702410231185446
- type: f1
value: 32.77141561233463
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 65.80944106573993
- type: ap
value: 74.76507883395132
- type: f1
value: 62.54969812143767
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.699587768554906
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.84639574537173
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 47.43767313019391
- type: f1
value: 47.95073117518173
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 16.902834008097162
- type: f1
value: 14.502935529082656
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
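## Usage
A minimal loading sketch with Sentence Transformers; `trust_remote_code=True` is assumed to be required because the repository is tagged `custom_code` (the example sentences are illustrative):
```python
from sentence_transformers import SentenceTransformer

# The custom_code tag means the repository ships its own modeling code
model = SentenceTransformer("ILKT/2024-06-24_22-31-18_epoch_34", trust_remote_code=True)

embeddings = model.encode(["Ala ma kota", "A cat has a home"])
print(embeddings.shape)
```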
|
mradermacher/cerberus-v0.1-GGUF
|
mradermacher
| 2024-06-28T15:38:02Z | 25 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:brahmairesearch/cerberus-v0.1",
"base_model:quantized:brahmairesearch/cerberus-v0.1",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-28T05:18:58Z |
---
base_model: brahmairesearch/cerberus-v0.1
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/brahmairesearch/cerberus-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/cerberus-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
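The multi-part files below are plain byte-level splits; a minimal sketch of joining one before use (file names taken from the table below):
```bash
# Join a split quant into a single file that llama.cpp can load
cat cerberus-v0.1.Q6_K.gguf.part1of2 cerberus-v0.1.Q6_K.gguf.part2of2 > cerberus-v0.1.Q6_K.gguf
```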
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/cerberus-v0.1-GGUF/resolve/main/cerberus-v0.1.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
YorkieOH10/Qwen2-7B-Multilingual-RP-Q8_0-GGUF
|
YorkieOH10
| 2024-06-28T15:32:24Z | 22 | 2 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"ko",
"ja",
"zh",
"es",
"base_model:maywell/Qwen2-7B-Multilingual-RP",
"base_model:quantized:maywell/Qwen2-7B-Multilingual-RP",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-28T15:31:50Z |
---
base_model: maywell/Qwen2-7B-Multilingual-RP
language:
- en
- ko
- ja
- zh
- es
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# YorkieOH10/Qwen2-7B-Multilingual-RP-Q8_0-GGUF
This model was converted to GGUF format from [`maywell/Qwen2-7B-Multilingual-RP`](https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo YorkieOH10/Qwen2-7B-Multilingual-RP-Q8_0-GGUF --hf-file qwen2-7b-multilingual-rp-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo YorkieOH10/Qwen2-7B-Multilingual-RP-Q8_0-GGUF --hf-file qwen2-7b-multilingual-rp-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo YorkieOH10/Qwen2-7B-Multilingual-RP-Q8_0-GGUF --hf-file qwen2-7b-multilingual-rp-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo YorkieOH10/Qwen2-7B-Multilingual-RP-Q8_0-GGUF --hf-file qwen2-7b-multilingual-rp-q8_0.gguf -c 2048
```
|
enochprince/gpt2TWI
|
enochprince
| 2024-06-28T15:26:23Z | 114 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-28T15:10:17Z |
---
base_model: gpt2
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2TWI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2TWI
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.971 | 1.0 | 884 | 4.1424 |
| 2.9944 | 2.0 | 1768 | 4.0690 |
| 2.7402 | 3.0 | 2652 | 4.0769 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
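A minimal usage sketch with the `transformers` text-generation pipeline (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="enochprince/gpt2TWI")
print(generator("Agoo", max_new_tokens=30)[0]["generated_text"])
```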
|
johnpaulbin/llama8b-tokipona-epoch1-merged
|
johnpaulbin
| 2024-06-28T15:24:03Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-06-28T15:19:24Z |
---
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** johnpaulbin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
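A minimal inference sketch with `transformers`; since the repository is tagged `4-bit`/`bitsandbytes`, this assumes a CUDA machine with `bitsandbytes` installed (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "johnpaulbin/llama8b-tokipona-epoch1-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Weights are stored 4-bit, so device_map="auto" places them on the GPU
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("toki!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```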
|
Marzu39/bertturk-Cased-128k-QA
|
Marzu39
| 2024-06-28T15:22:54Z | 26 | 2 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"question-answering",
"Question Answering",
"tr",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-06-27T19:55:37Z |
---
license: mit
language:
- tr
tags:
- Question Answering
- bert
metrics:
- exact_match
- f1
pipeline_tag: question-answering
---
# Turkish SQuAD Model: Question Answering
I fine-tuned a Turkish BERT model for the question-answering problem with THQuAD:
- **BERTürk-Cased128k:** https://huggingface.co/dbmdz/bert-base-turkish-128k-cased
- **THQuAD Dataset:** https://github.com/okanvk/Turkish-Reading-Comprehension-Question-Answering-Dataset
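A minimal usage sketch with the `transformers` question-answering pipeline (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Marzu39/bertturk-Cased-128k-QA")
result = qa(
    question="Model hangi veri kümesiyle eğitildi?",
    context="Bu model, THQuAD veri kümesi ile eğitilmiş bir Türkçe soru cevaplama modelidir.",
)
print(result["answer"], result["score"])
```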
|
turkish-nlp-suite/NER-bert-32K-large
|
turkish-nlp-suite
| 2024-06-28T15:21:11Z | 180 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-28T15:20:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ahmedesmail16/Psoriasis-500-100aug-224-swinv2-large
|
ahmedesmail16
| 2024-06-28T15:20:44Z | 213 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-large-patch4-window7-224",
"base_model:finetune:microsoft/swin-large-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-28T11:22:14Z |
---
license: apache-2.0
base_model: microsoft/swin-large-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Psoriasis-500-100aug-224-swinv2-large
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8227074235807861
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Psoriasis-500-100aug-224-swinv2-large
This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224](https://huggingface.co/microsoft/swin-large-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7383
- Accuracy: 0.8227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.4126 | 0.9840 | 46 | 0.9408 | 0.6882 |
| 0.3672 | 1.9893 | 93 | 0.6431 | 0.7703 |
| 0.133 | 2.9947 | 140 | 0.5938 | 0.7921 |
| 0.0624 | 4.0 | 187 | 0.6128 | 0.8035 |
| 0.0473 | 4.9840 | 233 | 0.6654 | 0.8114 |
| 0.0276 | 5.9893 | 280 | 0.7090 | 0.8166 |
| 0.0111 | 6.9947 | 327 | 0.7133 | 0.8140 |
| 0.0081 | 8.0 | 374 | 0.7639 | 0.8183 |
| 0.0039 | 8.9840 | 420 | 0.7387 | 0.8236 |
| 0.0065 | 9.8396 | 460 | 0.7383 | 0.8227 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
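A minimal inference sketch with the `transformers` image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ahmedesmail16/Psoriasis-500-100aug-224-swinv2-large",
)
# Replace "example.jpg" with a path or URL to an input image
print(classifier("example.jpg"))
```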
|
ILKT/2024-06-24_22-31-18_epoch_30
|
ILKT
| 2024-06-28T15:17:47Z | 139 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T05:58:12Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_30
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.88270377733598
- type: f1
value: 21.226258398469113
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 57.85
- type: ap
value: 16.259810764416162
- type: f1
value: 48.45287399764252
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 10.704051750841453
- type: v_measure_std
value: 1.5468867317269348
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.19435104236718
- type: f1
value: 26.01025873588997
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.175110673880965
- type: f1
value: 25.88034202525453
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 37.0275722932078
- type: f1
value: 35.40017423181055
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.43384161337924
- type: f1
value: 35.56073988420182
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 61.42484795829712
- type: ap
value: 72.67069903592473
- type: f1
value: 58.28454496310418
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 34.838139817849815
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.573685397455332
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 47.64542936288089
- type: f1
value: 48.73973292772384
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 17.125506072874494
- type: f1
value: 14.570271092564107
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
n0w0f/MatText-atom-seq-2m
|
n0w0f
| 2024-06-28T15:15:57Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"chemistry",
"materials",
"pretrained",
"en",
"dataset:n0w0f/MatText",
"arxiv:1910.09700",
"arxiv:2406.17295",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-06-05T03:51:42Z |
---
library_name: transformers
tags:
- chemistry
- bert
- materials
- pretrained
license: mit
datasets:
- n0w0f/MatText
language:
- en
---
# Model Card for Model ID
A model pretrained using masked language modelling on 2 million crystal structures in one of the **MatText** representations.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**MatText** model pretrained using masked language modelling on crystal structures mined from NOMAD, represented using the MatText Atom Sequences representation (a space-separated enumeration of element symbols).
- **Developed by:** [Lamalab](https://github.com/lamalab-org)
- **Homepage:** https://github.com/lamalab-org/MatText
- **Leaderboard:** To be published
- **Point of Contact:** [Nawaf Alampara](https://github.com/n0w0f)
- **Model type:** Pretrained BERT
- **Language(s) (NLP):** This is not a natural language model
- **License:** MIT
### Model Sources
- **Repository:** https://github.com/lamalab-org/MatText
- **Paper:** To be published
## Uses
### Direct Use
The base model can be used for generating meaningful features/embeddings of bulk structures without further training.
This model is well suited to being fine-tuned for narrower downstream tasks.
### Downstream Use
With fine-tuning, this model can be used for property prediction, classification, or extraction.
## Bias, Risks, and Limitations
> The model was trained only on bulk structures (the **n0w0f/MatText** `pretrain2m` dataset).
The pretraining dataset is a subset of the materials deposited in the NOMAD archive. We queried only 3D-connected structures (i.e., excluding 2D materials, which often require special treatment) and, for consistency, limited our query to materials for which the bandgap has been computed using the PBE functional and the VASP code.
### Recommendations
## How to Get Started with the Model
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("n0w0f/MatText-atom-seq-2m")
```
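To turn an atom-sequence string into a fixed-size embedding, one option is mean-pooling the last hidden states; a minimal sketch, assuming the repository also ships a matching tokenizer:
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "n0w0f/MatText-atom-seq-2m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# MatText atom sequence: a space-separated enumeration of element symbols
inputs = tokenizer("Na Cl Na Cl", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_dim)
embedding = hidden.mean(dim=1)                  # mean-pool over tokens
print(embedding.shape)
```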
## Training Details
### Training Data
**n0w0f/MatText - pretrain2m**
The dataset contains crystal structures in various text representations and labels for some subsets.
https://huggingface.co/datasets/n0w0f/MatText
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
### Training Procedure
#### Training Hyperparameters
- **Training regime:** fp32 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
### Testing Data, Factors & Metrics
#### Testing Data
https://huggingface.co/datasets/n0w0f/MatText/viewer/pretrain2m/test
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 8 A100 GPUs with 40GB
- **Hours used:** 72h
- **Cloud Provider:** Private Infrastructure
- **Compute Region:** US/EU
- **Carbon Emitted:** 250W x 72h = 18 kWh x 0.432 kg eq. CO2/kWh = 7.78 kg eq. CO2
## Technical Specifications
#### Software
Pretrained using https://github.com/lamalab-org/MatText
## Citation
If you use MatText in your work, please cite
```
@misc{alampara2024mattextlanguagemodelsneed,
title={MatText: Do Language Models Need More than Text & Scale for Materials Modeling?},
author={Nawaf Alampara and Santiago Miret and Kevin Maik Jablonka},
year={2024},
eprint={2406.17295},
archivePrefix={arXiv},
primaryClass={cond-mat.mtrl-sci},
url={https://arxiv.org/abs/2406.17295},
}
```
## Model Card Authors
The model was trained by Nawaf Alampara ([n0w0f](https://github.com/n0w0f)), Santiago Miret ([LinkedIn]()), and Kevin Maik Jablonka ([kjappelbaum](https://github.com/kjappelbaum)).
## Model Card Contact
[Nawaf](https://github.com/n0w0f),
[Kevin](https://github.com/kjappelbaum)
|
n0w0f/MatText-slices-2m
|
n0w0f
| 2024-06-28T15:14:30Z | 169 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"chemistry",
"materials",
"pretrained",
"en",
"dataset:n0w0f/MatText",
"arxiv:1910.09700",
"arxiv:2406.17295",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-06-05T03:40:52Z |
---
library_name: transformers
tags:
- chemistry
- bert
- materials
- pretrained
license: mit
datasets:
- n0w0f/MatText
language:
- en
---
# Model Card for Model ID
A model pretrained using masked language modelling on 2 million crystal structures in one of the **MatText** representations.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**MatText** model pretrained using masked language modelling on crystal structures mined from NOMAD, represented using the MatText SLICES representation (the [SLICES](https://github.com/xiaohang007/SLICES) string representation of a material).
- **Developed by:** [Lamalab](https://github.com/lamalab-org)
- **Homepage:** https://github.com/lamalab-org/MatText
- **Leaderboard:** To be published
- **Point of Contact:** [Nawaf Alampara](https://github.com/n0w0f)
- **Model type:** Pretrained BERT
- **Language(s) (NLP):** This is not a natural language model
- **License:** MIT
### Model Sources
- **Repository:** https://github.com/lamalab-org/MatText
- **Paper:** To be published
## Uses
### Direct Use
The base model can be used for generating meaningful features/embeddings of bulk structures without further training.
This model is well suited to being fine-tuned for narrower downstream tasks.
### Downstream Use
With fine-tuning, this model can be used for property prediction, classification, or extraction.
## Bias, Risks, and Limitations
> The model was trained only on bulk structures (the **n0w0f/MatText** `pretrain2m` dataset).
The pretraining dataset is a subset of the materials deposited in the NOMAD archive. We queried only 3D-connected structures (i.e., excluding 2D materials, which often require special treatment) and, for consistency, limited our query to materials for which the bandgap has been computed using the PBE functional and the VASP code.
### Recommendations
## How to Get Started with the Model
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("n0w0f/MatText-slices-2m")
```
## Training Details
### Training Data
**n0w0f/MatText - pretrain2m**
The dataset contains crystal structures in various text representations and labels for some subsets.
https://huggingface.co/datasets/n0w0f/MatText
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
### Training Procedure
#### Training Hyperparameters
- **Training regime:** fp32 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
### Testing Data, Factors & Metrics
#### Testing Data
https://huggingface.co/datasets/n0w0f/MatText/viewer/pretrain2m/test
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 8 A100 GPUs with 40GB
- **Hours used:** 72h
- **Cloud Provider:** Private Infrastructure
- **Compute Region:** US/EU
- **Carbon Emitted:** 250W x 72h = 18 kWh x 0.432 kg eq. CO2/kWh = 7.78 kg eq. CO2
## Technical Specifications
#### Software
Pretrained using https://github.com/lamalab-org/MatText
## Citation
If you use MatText in your work, please cite
```
@misc{alampara2024mattextlanguagemodelsneed,
title={MatText: Do Language Models Need More than Text & Scale for Materials Modeling?},
author={Nawaf Alampara and Santiago Miret and Kevin Maik Jablonka},
year={2024},
eprint={2406.17295},
archivePrefix={arXiv},
primaryClass={cond-mat.mtrl-sci},
url={https://arxiv.org/abs/2406.17295},
}
```
## Model Card Authors
The model was trained by Nawaf Alampara ([n0w0f](https://github.com/n0w0f)), Santiago Miret ([LinkedIn]()), and Kevin Maik Jablonka ([kjappelbaum](https://github.com/kjappelbaum)).
## Model Card Contact
[Nawaf](https://github.com/n0w0f),
[Kevin](https://github.com/kjappelbaum)
|
ekaterina-blatova-jb/model_lr1e-4_old_scheduler_with_t_max_275_non_relevant_v1
|
ekaterina-blatova-jb
| 2024-06-28T15:12:04Z | 168 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-28T15:10:31Z |
---
{}
---
## Evaluation results
Validation loss on the whole input: 1.0974553918931633
Validation loss on completion: 1.011629299435299
|
jeggers/galactica-125m-cot
|
jeggers
| 2024-06-28T15:10:52Z | 151 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-28T14:18:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ILKT/2024-06-24_22-31-18_epoch_27
|
ILKT
| 2024-06-28T15:09:44Z | 139 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T05:01:54Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_27
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.161033797216696
- type: f1
value: 21.482011086156934
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 56.63
- type: ap
value: 15.931690451233749
- type: f1
value: 47.44494833540974
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 13.45590374671452
- type: v_measure_std
value: 1.7793243122498879
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 24.414929388029588
- type: f1
value: 22.005744088686992
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 24.540088539104765
- type: f1
value: 22.211774822570558
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 32.72696704774714
- type: f1
value: 30.17722657645675
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 32.60206591244466
- type: f1
value: 30.674472721675965
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 63.39415001448017
- type: ap
value: 72.9699096941234
- type: f1
value: 59.46850236290796
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.28671475445924
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 30.5039655519801
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 47.202216066481995
- type: f1
value: 47.76052393675994
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 18.60323886639676
- type: f1
value: 15.583334857497391
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
damgomz/fp_bs1_lr5_x8
|
damgomz
| 2024-06-28T15:03:00Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-06-25T10:12:07Z |
---
language: en
tags:
- fill-mask
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
24 June 2024!
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | fp_bs1_lr5_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-05 |
| batch_size | 1 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 659911 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 15.719150 | 11.665336 |
| 0.5 | 7.340089 | 7.256945 |
| 1.0 | 7.157110 | 7.134097 |
| 1.5 | 7.130677 | 7.098063 |
| 2.0 | 7.123113 | 7.127436 |
| 2.5 | 7.122067 | 7.126453 |
| 3.0 | 7.119943 | 7.094573 |
| 3.5 | 7.115897 | 7.081500 |
| 4.0 | 7.110891 | 7.116901 |
| 4.5 | 7.099238 | 7.080173 |
| 5.0 | 7.094084 | 7.070796 |
| 5.5 | 7.086580 | 7.081593 |
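A minimal usage sketch with the `transformers` fill-mask pipeline (ALBERT checkpoints use `[MASK]` as the mask token; the sentence is illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="damgomz/fp_bs1_lr5_x8")
for prediction in fill("The capital of France is [MASK]."):
    print(prediction["token_str"], prediction["score"])
```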
|
bofenghuang/phonemizer-wav2vec2-ctc-french
|
bofenghuang
| 2024-06-28T15:01:51Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_13_0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-08T20:12:49Z |
---
license: mit
language: fr
datasets:
- mozilla-foundation/common_voice_13_0
tags:
- automatic-speech-recognition
---
# Wav2vec2-CTC-based French Phonemizer
## Usage
*Infer audio*
```python
import soundfile as sf
import torch
from transformers import AutoModelForCTC, AutoProcessor, pipeline
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
# Load model
model_name_or_path = "bofenghuang/phonemizer-wav2vec2-ctc-french"
processor = AutoProcessor.from_pretrained(model_name_or_path)
model_sample_rate = processor.feature_extractor.sampling_rate
model = AutoModelForCTC.from_pretrained(model_name_or_path, torch_dtype=torch_dtype)
model.to(device)
# Init pipeline
pipe = pipeline(
"automatic-speech-recognition",
model=model,
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
torch_dtype=torch_dtype,
device=device,
)
# Example audio
audio_file_path = "/path/to/example/wav/file"
# Infer with pipeline
result = pipe(audio_file_path)
print(result["text"])
# Infer w/ lower-level api
waveform, sample_rate = sf.read(audio_file_path, start=0, frames=-1, dtype="float32", always_2d=False)
input_dict = processor(waveform, sampling_rate=model_sample_rate, return_tensors="pt")
with torch.inference_mode():
input_values = input_dict.input_values.to(device, dtype=torch_dtype)
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_text = processor.batch_decode(predicted_ids)[0]
print(predicted_text)
```
*Phonemes were generated using the following code snippet:*
```python
# !pip install phonemizer
from phonemizer.backend import EspeakBackend
from phonemizer.separator import Separator
# initialize the espeak backend for French
backend = EspeakBackend("fr-fr", language_switch="remove-flags")
# separate phones by a space and ignoring words boundaries
separator = Separator(phone=None, word=" ", syllable="")
def phonemize_text_phonemizer(s):
return backend.phonemize([s], separator=separator, strip=True, njobs=1)[0]
input_str = "ce modèle est utilisé pour identifier les phonèmes dans l'audio entrant"
print(phonemize_text_phonemizer(input_str))
# 'sə modɛl ɛt ytilize puʁ idɑ̃tifje le fonɛm dɑ̃ lodjo ɑ̃tʁɑ̃'
```
## Acknowledgement
Inspired by [Cnam-LMSSC/wav2vec2-french-phonemizer](https://huggingface.co/Cnam-LMSSC/wav2vec2-french-phonemizer).
|
ILKT/2024-06-24_22-31-18_epoch_21
|
ILKT
| 2024-06-28T14:59:31Z | 150 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T03:07:23Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_21
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.78330019880715
- type: f1
value: 20.771858871705792
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 56.279999999999994
- type: ap
value: 15.49719554940658
- type: f1
value: 46.92376146163914
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 8.211433141755382
- type: v_measure_std
value: 1.0274501735925425
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 20.29926025554808
- type: f1
value: 17.88003887911928
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 20.614854894244957
- type: f1
value: 17.875528095960654
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 27.00739744451917
- type: f1
value: 25.302807017489286
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 26.404328578455484
- type: f1
value: 24.988815404178354
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 66.79409209383145
- type: ap
value: 74.9117853288762
- type: f1
value: 63.08721548585344
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.1973316339804
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.33921437141035
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 48.033240997229925
- type: f1
value: 48.88233653501732
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 22.753036437246962
- type: f1
value: 18.158970825716082
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ZeroWw/llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF
|
ZeroWw
| 2024-06-28T14:58:21Z | 97 | 1 | null |
[
"gguf",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-06-28T14:45:57Z |
---
license: mit
language:
- en
---
My own (ZeroWw) quantizations.
Output and embed tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.
Result:
both the f16.q6 and f16.q5 variants are smaller than the standard q8_0 quantization,
and they perform as well as the pure f16.
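For reference, a mixed-precision GGUF like this could plausibly be produced with llama.cpp's `llama-quantize` tool. The sketch below is an assumption about the recipe, not a record of the exact commands used; the tool path, flag names, and file names are hypothetical placeholders.
```python
# Hypothetical sketch: keep output/embedding tensors at f16 while quantizing
# the remaining tensors to q6_k, using llama.cpp's llama-quantize (assumed flags).
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--output-tensor-type", "f16",    # output tensor stays at f16
        "--token-embedding-type", "f16",  # embedding tensor stays at f16
        "model-f16.gguf",                 # input full-precision GGUF (assumed name)
        "model-f16-q6_k.gguf",            # output GGUF (assumed name)
        "q6_k",                           # quantization type for all other tensors
    ],
    check=True,
)
```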
|
ILKT/2024-06-24_22-31-18_epoch_20
|
ILKT
| 2024-06-28T14:57:06Z | 143 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T02:48:58Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_20
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.28628230616302
- type: f1
value: 20.475845856695823
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 56.419999999999995
- type: ap
value: 16.10654700878303
- type: f1
value: 47.62295599591093
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 9.063117094239251
- type: v_measure_std
value: 0.4828012873717384
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 21.7182246133154
- type: f1
value: 19.603249455414858
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 21.81013280865716
- type: f1
value: 19.113114907383764
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 27.952252858103567
- type: f1
value: 26.011687348439626
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 27.47663551401869
- type: f1
value: 26.20583246874069
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 66.17723718505648
- type: ap
value: 74.77181368628489
- type: f1
value: 62.94070304996734
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.38639992492072
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.946085480651732
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 46.911357340720215
- type: f1
value: 47.66467508045819
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 17.28744939271255
- type: f1
value: 13.86436658959402
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-24_22-31-18_epoch_17
|
ILKT
| 2024-06-28T14:52:59Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T01:52:17Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_17
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.111332007952285
- type: f1
value: 20.922233617200952
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 55.010000000000005
- type: ap
value: 15.264405069688278
- type: f1
value: 46.42734568792598
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 11.749984888594936
- type: v_measure_std
value: 2.153175630562388
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 24.10221923335575
- type: f1
value: 21.743191256542133
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 24.181013280865713
- type: f1
value: 21.417194623179693
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 31.11297915265635
- type: f1
value: 29.01600098649581
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 30.570585341859324
- type: f1
value: 28.997180237535552
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 61.76947581812917
- type: ap
value: 72.61605166006719
- type: f1
value: 58.70062063338849
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 36.630974271573855
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 32.37393038036762
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 45.581717451523545
- type: f1
value: 46.52096155620845
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 18.643724696356273
- type: f1
value: 15.461041528691336
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
jointriple/brand_classification_2_20240628_model_2
|
jointriple
| 2024-06-28T14:52:02Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:eu"
] |
text-classification
| 2024-06-28T14:06:32Z |
|
ILKT/2024-06-24_22-31-18_epoch_16
|
ILKT
| 2024-06-28T14:51:46Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T01:33:12Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_16
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.64811133200795
- type: f1
value: 21.01619403632889
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 53.81
- type: ap
value: 15.006258181202517
- type: f1
value: 45.30401194096939
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 11.547939806777133
- type: v_measure_std
value: 1.6717488806736056
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.952252858103567
- type: f1
value: 25.240216767101252
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.776684702410233
- type: f1
value: 24.70946522432108
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.19367854741089
- type: f1
value: 34.36769453477033
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.53861288735858
- type: f1
value: 34.002311457500255
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 62.412395018824206
- type: ap
value: 72.94981444422184
- type: f1
value: 59.16911793390662
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.335602459117176
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.304187551606717
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 45.41551246537397
- type: f1
value: 46.45440514901807
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 19.291497975708502
- type: f1
value: 15.845717940753659
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ekaterina-blatova-jb/model_lr1e-4_old_scheduler_with_t_max_275_non_relevant_v0
|
ekaterina-blatova-jb
| 2024-06-28T14:51:14Z | 168 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-28T14:19:50Z |
---
{}
---
## Evaluation results
Validation loss on the whole input: 1.0646231945138425
Validation loss on completion: 1.0394271444529295
|
ILKT/2024-06-24_22-31-18_epoch_15
|
ILKT
| 2024-06-28T14:50:07Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T01:14:20Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_15
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.216699801192842
- type: f1
value: 20.164869666815516
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 57.16
- type: ap
value: 15.542427937870338
- type: f1
value: 47.73134410011261
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 12.311204474945676
- type: v_measure_std
value: 1.3064595697415842
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.108271687962336
- type: f1
value: 24.159530584548946
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.456960157402854
- type: f1
value: 24.14499073671646
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.37794216543376
- type: f1
value: 32.75482095367668
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.323659616330545
- type: f1
value: 32.99355227037951
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 61.98378221836084
- type: ap
value: 72.54345570828822
- type: f1
value: 58.740753994800286
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 36.62107905291057
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 32.347901814059334
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 46.31578947368421
- type: f1
value: 46.999520513440615
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 20.587044534412957
- type: f1
value: 16.591927642821354
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-24_22-31-18_epoch_12
|
ILKT
| 2024-06-28T14:45:16Z | 155 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T00:17:09Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_12
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.180914512922467
- type: f1
value: 21.029620640172286
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 55.190000000000005
- type: ap
value: 15.455081900376996
- type: f1
value: 46.61178189488246
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 11.90375291377507
- type: v_measure_std
value: 2.3458596312359545
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 29.49226630800269
- type: f1
value: 27.07210995524843
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.250860796851942
- type: f1
value: 27.06197056709776
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.503026227303295
- type: f1
value: 33.5734611156977
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.123954746679786
- type: f1
value: 33.670922076731756
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 62.61511728931365
- type: ap
value: 72.86067406205919
- type: f1
value: 59.47819311429392
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 36.431027476680136
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 32.35838878022729
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 44.45983379501386
- type: f1
value: 45.40341365077572
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 15.627530364372468
- type: f1
value: 14.094930443098436
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
AIEKEK/distilbert-base-uncased-finetuned-emotion
|
AIEKEK
| 2024-06-28T14:42:52Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-28T11:35:02Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9205
- name: F1
type: f1
value: 0.9200442708403018
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Accuracy: 0.9205
- F1: 0.9200
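A minimal inference sketch for this checkpoint, using the standard `transformers` pipeline API (the repo id is taken from this card's title):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier
classifier = pipeline(
    "text-classification",
    model="AIEKEK/distilbert-base-uncased-finetuned-emotion",
)

# Single-sentence prediction; returns the top label and its score
print(classifier("I can't wait to see you this weekend!"))
```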
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8268 | 1.0 | 250 | 0.3130 | 0.905 | 0.9044 |
| 0.2529 | 2.0 | 500 | 0.2206 | 0.9205 | 0.9200 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
BigHuggyD/cohereforai_c4ai-command-r-plus_exl2_7.0bpw_h8
|
BigHuggyD
| 2024-06-28T14:42:14Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"7-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-06-28T13:58:55Z |
---
inference: false
license: cc-by-nc-4.0
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
---
# Model Card for C4AI Command R+
🚨 **This model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit)**.
## Model Summary
C4AI Command R+ is an open-weights research release of a 104-billion-parameter model with highly advanced capabilities, including Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use, which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering.
C4AI Command R+ is part of a family of open-weight releases from Cohere For AI and Cohere. Our smaller companion model is [C4AI Command R](https://huggingface.co/CohereForAI/c4ai-command-r-v01).
- Developed by: [Cohere](https://cohere.com/) and [Cohere For AI](https://cohere.for.ai)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: c4ai-command-r-plus
- Model Size: 104 billion parameters
- Context length: 128K
**Try C4AI Command R+**
You can try out C4AI Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
**Usage**
Please install `transformers` from the source repository that includes the necessary changes for this model.
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
**Quantized model through bitsandbytes, 8-bit precision**
```python
# pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
**Quantized model through bitsandbytes, 4-bit precision**
This model is the non-quantized version of C4AI Command R+. You can find the 4-bit quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit).
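No snippet is given for this mode in the card; the sketch below simply mirrors the 8-bit example above with `load_in_4bit=True`, and is an illustration of on-the-fly 4-bit loading rather than the exact setup of the pre-quantized 4-bit repo.
```python
# pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# On-the-fly 4-bit quantization via bitsandbytes
bnb_config = BitsAndBytesConfig(load_in_4bit=True)

model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
```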
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.
**Languages covered**: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.
Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.
**Context length**: Command R+ supports a context length of 128K.
## Evaluations
Command R+ has been submitted to the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We include the results below, along with a direct comparison to the strongest state-of-the-art open-weights models currently available on Hugging Face. We note that these results are only useful for comparison when evaluations are implemented for all models in a [standardized way](https://github.com/EleutherAI/lm-evaluation-harness) using publicly available code, and hence shouldn't be used for comparison outside of models submitted to the leaderboard, or compared against self-reported numbers which can't be replicated in the same way.
| Model | Average | Arc (Challenge) | Hella Swag | MMLU | Truthful QA | Winogrande | GSM8k |
|:--------------------------------|----------:|------------------:|-------------:|-------:|--------------:|-------------:|--------:|
| **CohereForAI/c4ai-command-r-plus** | 74.6 | 70.99 | 88.6 | 75.7 | 56.3 | 85.4 | 70.7 |
| [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct) | 74.5 | 68.9 | 89 | 73.7 | 66.9 | 81.8 | 66.9 |
| [Mixtral 8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.7 | 70.1 | 87.6 | 71.4 | 65 | 81.1 | 61.1 |
| [Mixtral 8x7B Chat](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.6 | 70.2 | 87.6 | 71.2 | 64.6 | 81.4 | 60.7 |
| [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) | 68.5 | 65.5 | 87 | 68.2 | 52.3 | 81.5 | 56.6 |
| [Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf) | 67.9 | 67.3 | 87.3 | 69.8 | 44.9 | 83.7 | 54.1 |
| [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 65.3 | 65.4 | 84.2 | 74.9 | 55.4 | 80.1 | 31.9 |
| [Gemma-7B](https://huggingface.co/google/gemma-7b) | 63.8 | 61.1 | 82.2 | 64.6 | 44.8 | 79 | 50.9 |
| [LLama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | 62.4 | 64.6 | 85.9 | 63.9 | 52.8 | 80.5 | 26.7 |
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 61 | 60 | 83.3 | 64.2 | 42.2 | 78.4 | 37.8 |
We include these metrics here because they are frequently requested, but note that they do not capture RAG, multilingual, or tooling performance, nor the evaluation of open-ended generation, areas where we believe Command R+ to be state-of-the-art. For evaluations of RAG, multilingual performance, and tooling, read more [here](https://txt.cohere.com/command-r-plus-microsoft-azure/). For evaluation of open-ended generation, Command R+ is currently being evaluated on the [chatbot arena](https://chat.lmsys.org/).
### Tool use & multihop capabilities:
Command R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation.
Command R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once.
The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions.
We recommend including the `directly_answer` tool, but it can be removed or renamed if required.
Comprehensive documentation for working with command R+'s tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example on how to render a prompt.
<details>
<summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary>
```python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# Define tools available for the model to use:
tools = [
{
"name": "internet_search",
"description": "Returns a list of relevant document snippets for a textual query retrieved from the internet",
"parameter_definitions": {
"query": {
"description": "Query to search the internet with",
"type": 'str',
"required": True
}
}
},
{
'name': "directly_answer",
"description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history",
'parameter_definitions': {}
}
]
# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_tool_use_template(
conversation,
tools=tools,
tokenize=False,
add_generation_prompt=True,
)
print(tool_use_prompt)
```
</details>
<details>
<summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary>
````
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.
## Available Tools
Here is a list of tools that you have available to you:
```python
def internet_search(query: str) -> List[Dict]:
"""Returns a list of relevant document snippets for a textual query retrieved from the internet
Args:
query (str): Query to search the internet with
"""
pass
```
```python
def directly_answer() -> List[Dict]:
"""Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
"""
pass
```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example:
```json
[
{
"tool_name": title of the tool in the specification,
"parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters
}
]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary>
````
Action: ```json
[
{
"tool_name": "internet_search",
"parameters": {
"query": "biggest penguin in the world"
}
}
]
```
````
</details>
### Grounded Generation and RAG Capabilities:
Command R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.
Command R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured.
By default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation.
The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.
Comprehensive documentation for working with Command R+'s grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example on how to render a prompt.
<details>
<summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary>
````python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# define documents to ground on:
documents = [
{ "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." },
{ "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."}
]
# render the tool use prompt as a string:
grounded_generation_prompt = tokenizer.apply_grounded_generation_template(
conversation,
documents=documents,
citation_mode="accurate", # or "fast"
tokenize=False,
add_generation_prompt=True,
)
print(grounded_generation_prompt)
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary>
````<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results>
Document: 0
title: Tall penguins
text: Emperor penguins are the tallest growing up to 122 cm in height.
Document: 1
title: Penguin habitats
text: Emperor penguins only live in Antarctica.
</results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line.
Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'.
Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'.
Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup.
Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary>
````
Relevant Documents: 0,1
Cited Documents: 0,1
Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres.
Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0>
````
</details>
### Code Capabilities:
Command R+ has been optimized to interact with your code by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation-related instructions.
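As an illustration, greedy decoding can be requested by disabling sampling in `generate`; this minimal sketch assumes the `tokenizer` and `model` objects from the usage example above:
```python
# Greedy decoding for a code-related instruction: do_sample=False drops
# temperature sampling entirely and always picks the most likely next token.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=False,
)
print(tokenizer.decode(gen_tokens[0]))
```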
### Model Card Contact
For errors or additional questions about details in this model card, contact [info@for.ai](mailto:info@for.ai).
### Terms of Use:
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try Chat:
You can try Command R+ chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
|
ILKT/2024-06-24_22-31-18_epoch_9
|
ILKT
| 2024-06-28T14:40:55Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-24T23:21:27Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_9
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 23.558648111332005
- type: f1
value: 21.31031235103072
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 55.09
- type: ap
value: 15.497147733139036
- type: f1
value: 46.386037833554354
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 12.573726482790947
- type: v_measure_std
value: 2.361518933440918
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.997982515131138
- type: f1
value: 25.816950777139848
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.740777176586324
- type: f1
value: 25.211409701173228
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.036987222595826
- type: f1
value: 32.080904594479605
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.66797835710772
- type: f1
value: 32.43087889076884
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 60.02316825948451
- type: ap
value: 71.46112131196064
- type: f1
value: 56.87631459987072
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.71430677017767
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 32.08739941050259
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 44.09972299168975
- type: f1
value: 45.180006819312354
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 14.068825910931174
- type: f1
value: 13.386747906337671
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
bwahyuh/awkokawokawokoaw
|
bwahyuh
| 2024-06-28T14:38:48Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-28T12:33:02Z |
---
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: awkokawokawokoaw
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# awkokawokawokoaw
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5909
- Accuracy: 0.7917
- Precision: 0.7547
- Recall: 0.7583
- F1: 0.7544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
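For reference, here is a minimal sketch of these settings expressed as 🤗 `TrainingArguments`; the output directory is an assumption, and the Adam betas/epsilon match the library defaults:
```python
from transformers import TrainingArguments

# Minimal sketch of the hyperparameters listed above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="awkokawokawokoaw",   # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the transformers default optimizer.
)
```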
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0793 | 1.0 | 169 | 1.0677 | 0.4333 | 0.1444 | 0.3333 | 0.2016 |
| 0.9871 | 2.0 | 338 | 0.9369 | 0.625 | 0.4613 | 0.5010 | 0.4515 |
| 0.7801 | 3.0 | 507 | 0.6453 | 0.76 | 0.7061 | 0.6986 | 0.7008 |
| 0.5823 | 4.0 | 676 | 0.5909 | 0.7917 | 0.7547 | 0.7583 | 0.7544 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
ILKT/2024-06-24_22-31-18_epoch_7
|
ILKT
| 2024-06-28T14:37:52Z | 142 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-24T22:43:29Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_7
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.52485089463221
- type: f1
value: 20.271490079154976
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 56.60000000000001
- type: ap
value: 16.19856744495776
- type: f1
value: 47.86571658762406
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 11.970804408897088
- type: v_measure_std
value: 2.1320723069002367
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 32.652992602555486
- type: f1
value: 29.74475688779791
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 33.13330054107231
- type: f1
value: 29.185461102539957
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 40.783456624075306
- type: f1
value: 38.30369135122915
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 40.29021151008362
- type: f1
value: 38.260904635599665
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 60.587894584419345
- type: ap
value: 71.42718058761915
- type: f1
value: 57.121276346929974
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 34.898051866239676
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 30.766838979735596
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 44.29362880886426
- type: f1
value: 45.27787120518845
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 18.947368421052634
- type: f1
value: 15.338372015742568
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-24_22-31-18_epoch_6
|
ILKT
| 2024-06-28T14:36:26Z | 147 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-24T22:24:20Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_6
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.306163021868784
- type: f1
value: 20.425129819839935
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 54.43
- type: ap
value: 14.898244772967246
- type: f1
value: 45.52739583284183
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 11.70923587854302
- type: v_measure_std
value: 1.308207783973302
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.500336247478142
- type: f1
value: 25.778471659962715
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.12100344318741
- type: f1
value: 24.98572345963325
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.77135171486214
- type: f1
value: 33.13752262924166
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 33.49237579931136
- type: f1
value: 32.31814512150225
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 60.095569070373585
- type: ap
value: 71.4920223470643
- type: f1
value: 57.04897158964835
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.02568521222918
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.25973731340811
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 44.23822714681441
- type: f1
value: 45.70560706279163
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 16.842105263157897
- type: f1
value: 15.056174742534159
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
TensorFamily/SigmaJourney
|
TensorFamily
| 2024-06-28T14:36:11Z | 43 | 7 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"full",
"pixart",
"pixart sigma",
"base_model:PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
"base_model:finetune:PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
"license:creativeml-openrail-m",
"diffusers:PixArtSigmaPipeline",
"region:us"
] |
text-to-image
| 2024-06-21T02:33:18Z |
---
base_model: PixArt-alpha/PixArt-Sigma-XL-2-1024-MS
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- full
- pixart
- pixart sigma
inference: true
widget:
- text: A blonde sexy girl, wearing glasses at latex shirt and a blue beanie with
a tattoo, blue and white, highly detailed, sublime, extremely beautiful, sharp
focus, refined, cinematic, intricate, elegant, dynamic, rich deep colors, bright
color, shining light, attractive, cute, pretty, background full, epic composition,
dramatic atmosphere, radiant, professional, stunning
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/1.png
- text: a wizard with a glowing staff and a glowing hat, colorful magic, dramatic
atmosphere, sharp focus, highly detailed, cinematic, original composition, fine
detail, intricate, elegant, creative, color spread, shiny, amazing, symmetry,
illuminated, inspired, pretty, attractive, artistic, dynamic background, relaxed,
professional, extremely inspirational, beautiful, determined, cute, adorable,
best
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/2.png
- text: girl in modern car, intricate, elegant, highly detailed, extremely complimentary
colors, beautiful, glowing aesthetic, pretty, dramatic light, sharp focus, perfect
composition, clear artistic color, calm professional background, precise, joyful,
emotional, unique, cute, best, gorgeous, great delicate, expressive, thought,
iconic, fine, awesome, creative, winning, charming, enhanced
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/3.png
- text: A girl stands amidst scattered glass shards, surrounded by a beautifully crafted
and expansive world. The scene is depicted from a dynamic angle, emphasizing her
determined expression. The background features vast landscapes with floating crystals
and soft, glowing lights that create a mystical and grand atmosphere.
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/ComfyUI_PixArt_00040_.png
- text: A girl stands amidst scattered glass shards, surrounded by a beautifully crafted
and expansive world. The scene is depicted from a dynamic angle, emphasizing her
determined expression. The background features vast landscapes with floating crystals
and soft, glowing lights that create a mystical and grand atmosphere.
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/ComfyUI_PixArt_00036_.png
- text: A close-up shot of a beautiful girl in a serene world. She has white hair
and is blindfolded, with a calm expression. Her hands are pressed together in
a prayer pose, with fingers interlaced and palms touching. The background is softly
blurred, enhancing her ethereal presence.
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/ComfyUI_PixArt_00041_.png
---
# SigmaJourney: PixartSigma + MidJourney v6
<Gallery />
## Inference
### ComfyUI
- Download the model file `transformer/diffusion_pytorch_model.safetensors` and put it into `ComfyUI/models/checkpoints`
- Use ExtraModels node: https://github.com/city96/ComfyUI_ExtraModels?tab=readme-ov-file#pixart

```python
import torch
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler
from diffusers.models import PixArtTransformer2DModel

model_id = "TensorFamily/SigmaJourney"

# Load the base PixArt-Sigma pipeline, then swap in this repo's fine-tuned transformer.
pipeline = DiffusionPipeline.from_pretrained("PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", torch_dtype=torch.float16)
pipeline.transformer = PixArtTransformer2DModel.from_pretrained(model_id, subfolder="transformer", torch_dtype=torch.float16)
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)

# Use one device consistently for both the pipeline and the RNG.
device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"
pipeline.to(device)

prompt = "On the left, there is a red cube. On the right, there is a blue sphere. On top of the red cube is a dog. On top of the blue sphere is a cat"
negative_prompt = "blurry, cropped, ugly"

image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    generator=torch.Generator(device=device).manual_seed(1641421826),
    width=1024,
    height=1024,
    guidance_scale=5.5,
).images[0]
image.save("output.png")  # PIL infers the PNG format from the file extension
```
|
ILKT/2024-06-24_22-31-18_epoch_5
|
ILKT
| 2024-06-28T14:34:48Z | 147 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-24T22:05:10Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_5
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.68389662027833
- type: f1
value: 20.35204242570132
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 52.65
- type: ap
value: 14.4576722771974
- type: f1
value: 43.709536786405664
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 13.988704827730041
- type: v_measure_std
value: 2.25902646682839
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.363819771351714
- type: f1
value: 24.237876108101116
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 26.84702410231185
- type: f1
value: 23.466499803828157
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 32.67652992602555
- type: f1
value: 31.006308473068977
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 32.223315297589764
- type: f1
value: 30.765388148876188
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 61.89690124529393
- type: ap
value: 72.12289348443896
- type: f1
value: 58.39300609083562
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.289108697400984
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 32.056187579711434
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 44.03047091412742
- type: f1
value: 44.825821827557796
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 16.336032388663963
- type: f1
value: 14.497649305669832
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-24_22-31-18_epoch_4
|
ILKT
| 2024-06-28T14:32:55Z | 141 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-24T21:46:21Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_4
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.306163021868784
- type: f1
value: 20.236487626058857
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 53.68000000000001
- type: ap
value: 14.73726623742049
- type: f1
value: 45.190406815153224
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 14.477465473489056
- type: v_measure_std
value: 1.2451504858169187
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 29.603227975790187
- type: f1
value: 26.912672734118765
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.250860796851942
- type: f1
value: 27.119957429866933
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.53799596503026
- type: f1
value: 33.170354622674765
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.40088539104772
- type: f1
value: 33.52216405101386
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 61.132348682305235
- type: ap
value: 72.63375062740438
- type: f1
value: 58.53955276732978
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.49887039263635
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 31.779790766120197
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 44.25207756232686
- type: f1
value: 45.16348806946095
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 19.19028340080972
- type: f1
value: 14.783737091434995
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-24_22-31-18_epoch_2
|
ILKT
| 2024-06-28T14:30:13Z | 141 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-24T21:08:41Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-18_epoch_2
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 22.1272365805169
- type: f1
value: 20.40706274241936
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 53.89000000000001
- type: ap
value: 15.0065757866563
- type: f1
value: 45.37807244348154
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 13.831518417378962
- type: v_measure_std
value: 1.8740709967382303
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.854068594485547
- type: f1
value: 28.77354244993034
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.880472208558785
- type: f1
value: 28.26846880996342
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.232010759919305
- type: f1
value: 33.58056574741105
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.49581898671913
- type: f1
value: 33.452200741970806
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 58.71416159860991
- type: ap
value: 71.2673061123842
- type: f1
value: 55.77970118525706
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 35.94292905709959
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 32.030570447745774
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 42.7562326869806
- type: f1
value: 43.42905909545033
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 21.700404858299596
- type: f1
value: 16.90237734516155
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
nota-ai/bk-sdm-v2-tiny
|
nota-ai
| 2024-06-28T14:27:09Z | 490 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dataset:ChristophSchuhmann/improved_aesthetics_6.5plus",
"arxiv:2305.15798",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-06-25T12:44:01Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
datasets:
- ChristophSchuhmann/improved_aesthetics_6.5plus
library_name: diffusers
pipeline_tag: text-to-image
extra_gated_prompt: >-
This model is open access and available to all, with a CreativeML OpenRAIL-M
license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or
harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to
use them and are accountable for their use which must not go against the
provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as
a service. If you do, please be aware you have to include the same use
restrictions as the ones in the license and share a copy of the CreativeML
OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here:
https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# BK-SDM-v2 Model Card
BK-SDM-{[**v2-Base**](https://huggingface.co/nota-ai/bk-sdm-v2-base), [**v2-Small**](https://huggingface.co/nota-ai/bk-sdm-v2-small), [**v2-Tiny**](https://huggingface.co/nota-ai/bk-sdm-v2-tiny)} are obtained by compressing [SD-v2.1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base).
- Block-removed Knowledge-distilled Stable Diffusion Models (BK-SDMs) are developed for efficient text-to-image (T2I) synthesis:
- Certain residual & attention blocks are eliminated from the U-Net of SD.
- Despite the use of very limited data, distillation retraining remains surprisingly effective.
- Resources for more information: [Paper](https://arxiv.org/abs/2305.15798), [GitHub](https://github.com/Nota-NetsPresso/BK-SDM).
## Examples with 🤗[Diffusers library](https://github.com/huggingface/diffusers).
Example inference code, using the default PNDM scheduler and 50 denoising steps:
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("nota-ai/bk-sdm-v2-tiny", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a black vase holding a bouquet of roses"
image = pipe(prompt).images[0]
image.save("example.png")
```
## Compression Method
Based on the [U-Net architecture](https://huggingface.co/nota-ai/bk-sdm-base#u-net-architecture) and [distillation retraining](https://huggingface.co/nota-ai/bk-sdm-base#distillation-pretraining) of BK-SDM, a reduced batch size (from 256 to 128) is used in BK-SDM-v2 for faster training; a minimal sketch of these settings follows the list below.
- **Training Data**: 212,776 image-text pairs (i.e., 0.22M pairs) from [LAION-Aesthetics V2 6.5+](https://laion.ai/blog/laion-aesthetics/).
- **Hardware:** A single NVIDIA A100 80GB GPU
- **Gradient Accumulations**: 4
- **Batch:** 128 (=4×32)
- **Optimizer:** AdamW
- **Learning Rate:** a constant learning rate of 5e-5 for 50K-iteration retraining
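For illustration only, a minimal PyTorch sketch of these settings; the placeholder module stands in for the compressed U-Net, and the actual training script lives in the BK-SDM GitHub repository:
```python
import torch
import torch.nn as nn

# Illustrative sketch of the retraining settings listed above.
# nn.Linear is a placeholder for the block-removed U-Net student.
student = nn.Linear(8, 8)

optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)  # constant learning rate
grad_accum_steps = 4      # gradient accumulations
micro_batch_size = 32     # per-step batch; effective batch = 4 * 32 = 128
max_train_iters = 50_000  # 50K-iteration retraining
```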
## Experimental Results
The following table shows the zero-shot results on 30K samples from the MS-COCO validation split. After generating 512×512 images with the PNDM scheduler and 25 denoising steps, we downsampled them to 256×256 for evaluating generation scores (this step is sketched below).
- Our models were drawn at the 50K-th training iteration.
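As a hedged sketch of that preprocessing step, each 512×512 generation can be downsampled with PIL before scoring:
```python
from PIL import Image

# Downsample a 512x512 generation to 256x256 before computing FID / IS / CLIP score.
img = Image.open("example.png")  # a 512x512 sample from the pipeline above
img.resize((256, 256), Image.LANCZOS).save("example_256.png")
```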
#### Compression of SD-v2.1-base
| Model | FID↓ | IS↑ | CLIP Score↑<br>(ViT-g/14) | # Params,<br>U-Net | # Params,<br>Whole SDM |
|---|:---:|:---:|:---:|:---:|:---:|
| [Stable Diffusion v2.1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) | 13.93 | 35.93 | 0.3075 | 0.87B | 1.26B |
| [BK-SDM-v2-Base](https://huggingface.co/nota-ai/bk-sdm-v2-base) (Ours) | 15.85 | 31.70 | 0.2868 | 0.59B | 0.98B |
| [BK-SDM-v2-Small](https://huggingface.co/nota-ai/bk-sdm-v2-small) (Ours) | 16.61 | 31.73 | 0.2901 | 0.49B | 0.88B |
| [BK-SDM-v2-Tiny](https://huggingface.co/nota-ai/bk-sdm-v2-tiny) (Ours) | 15.68 | 31.64 | 0.2897 | 0.33B | 0.72B |
#### Compression of SD-v1.4
| Model | FID↓ | IS↑ | CLIP Score↑<br>(ViT-g/14) | # Params,<br>U-Net | # Params,<br>Whole SDM |
|---|:---:|:---:|:---:|:---:|:---:|
| [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4) | 13.05 | 36.76 | 0.2958 | 0.86B | 1.04B |
| [BK-SDM-Base](https://huggingface.co/nota-ai/bk-sdm-base) (Ours) | 15.76 | 33.79 | 0.2878 | 0.58B | 0.76B |
| [BK-SDM-Base-2M](https://huggingface.co/nota-ai/bk-sdm-base-2m) (Ours) | 14.81 | 34.17 | 0.2883 | 0.58B | 0.76B |
| [BK-SDM-Small](https://huggingface.co/nota-ai/bk-sdm-small) (Ours) | 16.98 | 31.68 | 0.2677 | 0.49B | 0.66B |
| [BK-SDM-Small-2M](https://huggingface.co/nota-ai/bk-sdm-small-2m) (Ours) | 17.05 | 33.10 | 0.2734 | 0.49B | 0.66B |
| [BK-SDM-Tiny](https://huggingface.co/nota-ai/bk-sdm-tiny) (Ours) | 17.12 | 30.09 | 0.2653 | 0.33B | 0.50B |
| [BK-SDM-Tiny-2M](https://huggingface.co/nota-ai/bk-sdm-tiny-2m) (Ours) | 17.53 | 31.32 | 0.2690 | 0.33B | 0.50B |
#### Visual Analysis: Image Areas Affected By Each Word
KD enables our models to mimic the SDM, yielding similar per-word attribution maps. The model without KD behaves differently, causing dissimilar maps and inaccurate generation (e.g., two sheep and unusual bird shapes).
<center>
<img alt="cross-attn-maps" img src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/assets-bk-sdm/fig_cross-attn-maps_bk-sd-v2.png" width="100%">
</center>
# Uses
Please follow [the usage guidelines of Stable Diffusion v1](https://huggingface.co/CompVis/stable-diffusion-v1-4#uses).
# Acknowledgments
- [Microsoft for Startups Founders Hub](https://www.microsoft.com/en-us/startups) and [Gwangju AICA](http://www.aica-gj.kr/main.php) for generously providing GPU resources.
- [CompVis](https://github.com/CompVis/latent-diffusion), [Runway](https://runwayml.com/), and [Stability AI](https://stability.ai/) for the pioneering research on Stable Diffusion.
- [LAION](https://laion.ai/), [Diffusers](https://github.com/huggingface/diffusers), [PEFT](https://github.com/huggingface/peft), [DreamBooth](https://dreambooth.github.io/), [Gradio](https://www.gradio.app/), and [Core ML Stable Diffusion](https://github.com/apple/ml-stable-diffusion) for their valuable contributions.
# Citation
```bibtex
@article{kim2023architectural,
title={BK-SDM: A Lightweight, Fast, and Cheap Version of Stable Diffusion},
author={Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook},
journal={arXiv preprint arXiv:2305.15798},
year={2023},
url={https://arxiv.org/abs/2305.15798}
}
```
```bibtex
@article{kim2023bksdm,
title={BK-SDM: Architecturally Compressed Stable Diffusion for Efficient Text-to-Image Generation},
author={Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook},
journal={ICML Workshop on Efficient Systems for Foundation Models (ES-FoMo)},
year={2023},
url={https://openreview.net/forum?id=bOVydU0XKC}
}
```
*This model card is based on the [Stable Diffusion v1 model card]( https://huggingface.co/CompVis/stable-diffusion-v1-4).*
|
jgaertner/bert-finetuned-ner4invoice10
|
jgaertner
| 2024-06-28T14:26:48Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-28T14:24:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ILKT/2024-06-24_22-31-28_epoch_73
|
ILKT
| 2024-06-28T14:25:25Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T19:57:04Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-28_epoch_73
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 26.312127236580512
- type: f1
value: 24.232481470553992
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 58.31000000000002
- type: ap
value: 16.46166963254208
- type: f1
value: 49.0055629670121
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 15.304671602739258
- type: v_measure_std
value: 1.787642613655848
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 37.32347007397445
- type: f1
value: 34.125272765290035
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 37.62420068863748
- type: f1
value: 33.151978565270866
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 46.19367854741089
- type: f1
value: 43.67025977449479
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 45.7206099360551
- type: f1
value: 44.05485930742759
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 65.20417028670721
- type: ap
value: 74.62925981711386
- type: f1
value: 62.24678973165976
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 37.43213759850213
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 34.31473032389455
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 51.24653739612188
- type: f1
value: 51.826795326324
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 19.858299595141702
- type: f1
value: 18.61799209050263
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
QuantFactory/llm-compiler-7b-GGUF
|
QuantFactory
| 2024-06-28T14:24:35Z | 151 | 1 | null |
[
"gguf",
"text-generation",
"base_model:facebook/llm-compiler-7b",
"base_model:quantized:facebook/llm-compiler-7b",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-28T11:37:08Z |
---
license: other
base_model: facebook/llm-compiler-7b
pipeline_tag: text-generation
---
# QuantFactory/llm-compiler-7b-GGUF
This is a quantized version of [facebook/llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b) created using llama.cpp.
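A minimal usage sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the quant filename below is an assumption — substitute whichever GGUF file you download from this repo:
```python
from llama_cpp import Llama

# Hedged sketch: the exact filename depends on the quantization you choose (assumption).
llm = Llama(
    model_path="llm-compiler-7b.Q4_K_M.gguf",  # assumed filename
    n_ctx=16000,      # LLM Compiler models were trained with a 16,000-token context window
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)
output = llm("define i32 @add(i32 %a, i32 %b) {", max_tokens=128, temperature=0.0)
print(output["choices"][0]["text"])
```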
**Notice :** LLM Compiler is licensed under the LLM Compiler License, Copyright © Meta Platforms, Inc. All Rights Reserved.
# Introducing Meta Large Language Model Compiler (LLM Compiler), a state-of-the-art LLM for compiler optimization
## Takeaways
* LLM Compiler is a state-of-the-art LLM that builds upon Code Llama with improved performance for code optimization and compiler reasoning.
* LLM Compiler is free for both research and commercial use.
* LLM Compiler is available in two flavors:
* _LLM Compiler_, the foundational models, pretrained on over 500B tokens of LLVM-IR, x86_64, ARM, and CUDA assembly codes and trained to predict the effect of LLVM optimizations;
* and _LLM Compiler FTD_, which is further fine-tuned to predict the best optimizations for code in LLVM assembly to reduce code size, and to disassemble assembly code to LLVM-IR.
* LLM Compiler demonstrates far stronger understanding of compiler optimizations than existing publicly available LLMs, perfectly emulating the compiler 20% of the time.
* LLM Compiler FTD sets state-of-the-art results on the tasks of optimization for code size and disassembly. It achieves a 5.24% code size improvement over -Oz (vs. 0.03% for GPT-4 Turbo) and a 0.96 round-trip BLEU score on disassembly (vs. 0.43 for GPT-4 Turbo).
---
LINKS
* [LLM Compiler research paper](https://ai.meta.com/research/publications/meta-large-language-model-compiler-foundation-models-of-compiler-optimization/)
* Download the LLM Compiler and LLM Compiler FTD models:
* [llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b)
* [llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd)
* [llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b)
* [llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd)
---
We are excited to announce the release of LLM Compiler, a model targeted at code and compiler optimization tasks. LLM Compiler is built on top of our state-of-the-art large language model, Code Llama, adding capabilities to better understand compiler intermediate representations, assembly language and optimization. LLM Compiler is demonstrated on two difficult tasks: optimizing for code size and decompiling from assembly to the compiler’s intermediate representation. We release these foundation models to accelerate the application of LLMs for code optimization tasks and to enhance developer experience.
We are releasing LLM Compiler under the [LLM Compiler License Agreement](LICENSE.pdf), which incorporates the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy) for Llama Materials.
## How LLM Compiler works
LLM Compiler is a specialization of Code Llama. It is a cutting-edge tool designed to optimize code using deep learning. LLM Compiler has been pre-trained on a vast amount of LLVM assembly (IR), x86_64, ARM, and CUDA assembly codes. LLM Compiler can predict, given a piece of LLVM assembly and a sequence of optimization passes for `opt`, the LLVM optimizer, what the change in code size will be and what the output code will look like after applying these optimizations. It has ‘understood’ the behavior of the optimizing compiler to such a degree that in many cases it can perfectly replicate its output. These capabilities make it ideally suited to compiler optimization tasks.

In addition to this core functionality and to demonstrate its ability to solve complex compiler optimization problems, LLM Compiler has been fine-tuned for two specific downstream tasks:
1. Predicting the best optimization passes for `opt` to use in order to minimize code size, given a piece of LLVM assembly code. \

2. Generating LLVM IR from a piece of x86_64 or ARM assembly code. \

We are releasing LLM Compiler models in two sizes: 7B and 13B parameters. The models have been trained with a context window of 16,000 tokens.
The two models address different serving and latency requirements. The 7B model, for example, can be served on a single GPU and is better suited to tasks that require low latency, such as fine-grained optimization. The 13B model returns the best results.
When using the LLM Compiler models, users must abide by our license and acceptable use policy.

## LLM Compiler performance
We tested the performance of LLM Compiler models for emulating compiler transformations, predicting optimal pass lists, and disassembling assembly into intermediate representation on held-out test sets, and compared them to Code Llama and GPT-4. We compare LLM Compiler Foundation to Code Llama Base and LLM Compiler FTD to Code Llama Instruct.
We evaluate LLM Compiler's ability to emulate compiler optimizations by giving it samples of unoptimized intermediate representation and a randomly generated list of optimizations. We then ask the model to generate the corresponding IR after the optimizations have been applied. In the table below we report the model's accuracy in reproducing the IR we would get from running _opt_. Having little knowledge of IR, Code Llama is unable to achieve high accuracy, while LLM Compiler generates character-by-character matches of the expected output in 20% of cases.
<table>
<tr>
<td>Model
</td>
<td>Size
</td>
<td>Accuracy at emulating compiler optimizations
</td>
</tr>
<tr>
<td>Code Llama
</td>
<td>7B
</td>
<td>1.2%
</td>
</tr>
<tr>
<td>Code Llama
</td>
<td>13B
</td>
<td>0.8%
</td>
</tr>
<tr>
<td>LLM Compiler
</td>
<td>7B
</td>
<td>16%
</td>
</tr>
<tr>
<td>LLM Compiler
</td>
<td>13B
</td>
<td><strong>20%</strong>
</td>
</tr>
</table>
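The accuracy above counts only exact, character-by-character matches against the _opt_ output. A minimal sketch of how such a score could be computed (function and variable names are illustrative):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of model outputs that reproduce the opt-generated IR exactly."""
    matches = sum(p == r for p, r in zip(predictions, references))
    return matches / len(references)
```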
Using a similar approach, we evaluate our model's ability to optimize IR for code size. In this instance, however, we let the model generate the pass list to be used on a given unoptimized IR. We then use this pass list to optimize the particular program with _opt_ and record the binary size. The baseline is the binary size of the program when optimized using -Oz. Only the LLM Compiler FTD models provide an improvement over -Oz, with the 13B parameter model marginally outperforming the smaller model, generating smaller object files than -Oz in 61% of cases. A rough sketch of this evaluation loop appears below.
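The sketch assumes the LLVM tools `opt` and `llc` are on the PATH and uses object-file size as the size metric; the file names and placeholder pass list are illustrative, not from the paper:

```python
import os
import subprocess

def object_size(ir_file: str, passes: str) -> int:
    """Optimize the IR with the given pass pipeline, lower it to an object
    file with llc, and return the object file's size in bytes."""
    subprocess.run(["opt", f"-passes={passes}", ir_file, "-o", "tmp.bc"], check=True)
    subprocess.run(["llc", "-filetype=obj", "tmp.bc", "-o", "tmp.o"], check=True)
    return os.path.getsize("tmp.o")

model_pass_list = "instcombine,simplifycfg"  # placeholder for the model's suggestion
baseline = object_size("input.ll", "default<Oz>")  # the -Oz baseline
candidate = object_size("input.ll", model_pass_list)
improvement = 100.0 * (baseline - candidate) / baseline
print(f"code size improvement over -Oz: {improvement:.2f}%")
```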
Lastly, we evaluate disassembly performance by giving the model x86 assembly code and asking it to generate the corresponding IR. We then round-trip the model-generated disassembled IR back down to assembly. This enables us to evaluate the accuracy of the disassembly by computing the BLEU score between the original assembly and the round-trip result. LLM Compiler FTD 13B has the highest accuracy of round-tripped assembly (_round trip BLEU_) and most frequently produces perfect disassembly. Code Llama Instruct and GPT-4 Turbo struggle with generating syntactically correct LLVM-IR.
<table>
<tr>
<td>Model
</td>
<td>Size
</td>
<td>Code Size Improvement
</td>
<td>Round trip BLEU
</td>
</tr>
<tr>
<td>GPT-4 Turbo
</td>
<td>
</td>
<td>-0.01%
</td>
<td>0.43
</td>
</tr>
<tr>
<td>Code Llama Inst
</td>
<td>7B
</td>
<td>-0.49%
</td>
<td>0.48
</td>
</tr>
<tr>
<td>Code Llama Inst
</td>
<td>13B
</td>
<td>-0.42%
</td>
<td>0.62
</td>
</tr>
<tr>
<td>LLM Compiler FTD
</td>
<td>7B
</td>
<td>4.77%
</td>
<td>0.95
</td>
</tr>
<tr>
<td>LLM Compiler FTD
</td>
<td>13B
</td>
<td><strong>4.88%</strong>
</td>
<td><strong>0.96</strong>
</td>
</tr>
</table>
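To make the round-trip BLEU metric above concrete, here is a minimal sketch using NLTK; whitespace tokenization and the file paths are illustrative, and the paper's exact scoring setup may differ:

```python
from nltk.translate.bleu_score import sentence_bleu

# Compare the original assembly with the assembly obtained by compiling the
# model's disassembled IR back down ("round trip"); 1.0 would be a perfect round trip.
original = open("original.s").read().split()      # illustrative path
round_trip = open("round_trip.s").read().split()  # illustrative path
score = sentence_bleu([original], round_trip)
print(f"round-trip BLEU: {score:.2f}")
```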
## Releasing LLM Compiler
LLMs are already being used to make programming easier, and they are beginning to be used to make programs more efficient as well.
At Meta, our conviction is that AI models, especially those designed for coding, thrive best with an open strategy, fostering both innovation and security. Models that are accessible to the public can expedite the creation of novel compiler optimization technologies. In turn, this will allow programs to be more efficient and smaller, enhancing the quality of life for all. By making models such as LLM Compiler available, the whole community can explore their potential, pinpoint problems, and rectify any vulnerabilities.
The model weights are available on Hugging Face.
## Responsible use
Our research paper provides an in-depth look into the development process of LLM Compiler, the methods we used for our benchmarking tests, and further insights into the model's limitations. It also discusses the issues we faced and the steps we took to mitigate them.
Developers are advised to assess their models using evaluation benchmarks specific to compilers. Given that compilers are not bug-free, any suggested compiler optimizations must be rigorously tested. When a model decompiles assembly code, its accuracy should be confirmed.
## The future of generative AI for optimization
LLM Compiler is designed to support compiler researchers and engineers. But there are still many more use cases than our models can serve. We hope that LLM Compiler will inspire others to leverage LLMs to create new innovative tools for research and commercial products.
### Try LLM Compiler today
* Download the LLM Compiler and LLM Compiler FTD models:
* [llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b)
* [llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd)
* [llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b)
* [llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd)
* Read the research paper
* [LLM Compiler research paper](https://ai.meta.com/research/publications/meta-large-language-model-compiler-foundation-models-of-compiler-optimization/)
# **Model Card**
LLM Compiler is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 13 billion parameters. This is the repository for the 13 billion parameter foundation model version in the Hugging Face Transformers format. This model is designed for code optimization. Links to other models can be found in the index at the bottom.
| Number of parameters | Base Model | Fine-tuned for code size and disassembly |
| -------------------- | ---------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [facebook/llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b) | [facebook/llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd) |
| 13B | [facebook/llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b) | [facebook/llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd) |
## Model Use
To use this model, please make sure to install transformers:
```bash
pip install transformers accelerate
```
Example code using each of the model's compiler capabilities may be found in [llm_compiler_demo.py](llm_compiler_demo.py).
The code below demonstrates default capabilities. You may need to set a Hugging Face access token; see https://huggingface.co/docs/hub/security-tokens.
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "facebook/llm-compiler-13b"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline, sharding the model across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Prompt the model with a fragment of LLVM-IR and sample a continuation.
sequences = pipeline(
    '%3 = alloca i32, align 4',
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
## Model Details
*Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the LLM Compiler family of large language models (LLMs).
**Model Developers** Meta
**Variations** LLM Compiler comes in two model sizes (7B and 13B parameters) and in two flavors: the foundation model, and a version instruction fine-tuned for code size and disassembly.
**This repository contains the 13 billion parameter foundation model.**
**Input** Models input text only.
**Example prompt** See `llm_compiler_demo.py` in the repo for examples of the different use cases.
**Output** Models generate text only.
**Model Architecture** LLM Compiler is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** LLM Compiler was trained between January 2024 and June 2024.
**Status** This is a static model trained on an offline dataset.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Meta Large Language Model Compiler: Foundation Models of Compiler Optimization](https://ai.meta.com/research/publications/meta-large-language-model-compiler-foundation-models-of-compiler-optimization/)".
## Intended Use
**Intended Use Cases** LLM Compiler is intended for commercial and research use in English, relevant programming languages, LLVM IR, x86_64 assembly and ARM assembly.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy) and Licensing Agreement for LLM Compiler and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models were performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all LLM Compiler models required 14K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W), not including the training of Code Llama. 100% of the estimated tCO2eq emissions were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Code Llama with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/llm-compiler-foundation-models-for-compiler-optimization/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
LLM Compiler and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, LLM Compiler’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of LLM Compiler, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
|
TitanML/tiny-random-gemma2
|
TitanML
| 2024-06-28T14:13:28Z | 96 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-28T14:11:42Z |
---
license: apache-2.0
---
|
Goekdeniz-Guelmez/J.O.S.I.E.v4o-8b-stage1-beta2.2-Q4_K_S-GGUF
|
Goekdeniz-Guelmez
| 2024-06-28T14:12:34Z | 5 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"de",
"base_model:Goekdeniz-Guelmez/J.O.S.I.E.v4o-8b-stage1-beta2.2",
"base_model:quantized:Goekdeniz-Guelmez/J.O.S.I.E.v4o-8b-stage1-beta2.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-28T12:07:45Z |
---
base_model: Isaak-Carter/JOSIEv4o-8b-stage1-beta2.2
language:
- en
- de
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# Isaak-Carter/JOSIEv4o-8b-stage1-beta2.2-Q4_K_S-GGUF
This model was converted to GGUF format from [`Isaak-Carter/JOSIEv4o-8b-stage1-beta2.2`](https://huggingface.co/Isaak-Carter/JOSIEv4o-8b-stage1-beta2.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Isaak-Carter/JOSIEv4o-8b-stage1-beta2.2) for more details on the model.
## Use in ollama
```shell
ollama run goekdenizguelmez/j.o.s.i.e.v4o-8b-stage1-beta2.2
```
## Prompt Template
```text
"""<|begin_of_text|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.
<|begin_of_text|>main user "Gökdeniz Gülmez"
{{ .Prompt }}<|end_of_text|>
<|begin_of_text|>josie
{{ .Response }}<|end_of_text|>"""
```
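For raw-text inference outside ollama (for example with `llama-cli` below), the template can be rendered by hand. A hypothetical Python helper that fills the `{{ .Prompt }}` slot and leaves the `josie` turn open for the model to complete:

```python
SYSTEM_BLOCK = (
    '<|begin_of_text|>system\n'
    'You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart '
    'Intelligent Entity", a private and super-intelligent AI assistant, '
    'created by Gökdeniz Gülmez.\n'
)

def render_prompt(user_prompt: str) -> str:
    """Fill the template's {{ .Prompt }} slot for raw-text inference."""
    return (
        SYSTEM_BLOCK
        + '<|begin_of_text|>main user "Gökdeniz Gülmez"\n'
        + user_prompt + '<|end_of_text|>\n'
        + '<|begin_of_text|>josie\n'
    )
```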
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Isaak-Carter/JOSIEv4o-8b-stage1-beta2.2-Q4_K_S-GGUF --hf-file josiev4o-8b-stage1-beta2.2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Isaak-Carter/JOSIEv4o-8b-stage1-beta2.2-Q4_K_S-GGUF --hf-file josiev4o-8b-stage1-beta2.2-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Isaak-Carter/JOSIEv4o-8b-stage1-beta2.2-Q4_K_S-GGUF --hf-file josiev4o-8b-stage1-beta2.2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Isaak-Carter/JOSIEv4o-8b-stage1-beta2.2-Q4_K_S-GGUF --hf-file josiev4o-8b-stage1-beta2.2-q4_k_s.gguf -c 2048
```
|
ILKT/2024-06-24_22-31-28_epoch_70
|
ILKT
| 2024-06-28T14:09:25Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T18:58:47Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-28_epoch_70
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 26.48111332007952
- type: f1
value: 24.298380612639487
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 60.35
- type: ap
value: 16.78798415402588
- type: f1
value: 50.33518018271216
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 16.529773652785273
- type: v_measure_std
value: 1.8877406886054209
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 34.80497646267653
- type: f1
value: 31.94552315129161
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 34.500737825873095
- type: f1
value: 30.511014967947876
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 43.71553463349024
- type: f1
value: 41.62070761820803
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 43.20216428922774
- type: f1
value: 41.81727378474051
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 65.46770923834347
- type: ap
value: 75.03576405621197
- type: f1
value: 62.91820991346315
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 37.43681338077093
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 34.50200074557669
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 51.495844875346265
- type: f1
value: 52.14241705623085
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 20.364372469635626
- type: f1
value: 18.883480054629086
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
ILKT/2024-06-24_22-31-28_epoch_69
|
ILKT
| 2024-06-28T14:08:14Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T18:39:34Z |
---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-28_epoch_69
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 26.143141153081512
- type: f1
value: 23.904705178927614
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 59.92999999999999
- type: ap
value: 16.12714363495812
- type: f1
value: 49.50162616416747
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 12.961871705587452
- type: v_measure_std
value: 1.9107231004017178
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 33.41627437794216
- type: f1
value: 30.94424646756232
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 33.31529758976882
- type: f1
value: 29.912978954569226
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 41.04572965702757
- type: f1
value: 39.09529373996035
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 40.56566650270536
- type: f1
value: 39.564183326514204
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 66.79119606139588
- type: ap
value: 75.56178700648525
- type: f1
value: 63.95585191938602
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 36.86807636910853
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 34.69191873741806
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 51.3573407202216
- type: f1
value: 51.71294126787257
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 21.153846153846153
- type: f1
value: 19.802925983811352
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
Trendyol/Trendyol-LLM-7b-chat-v1.8
|
Trendyol
| 2024-06-28T14:02:59Z | 2,794 | 8 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-29T14:45:41Z |
---
language:
- tr
pipeline_tag: text-generation
license: apache-2.0
base_model: Trendyol/Trendyol-LLM-7b-base-v1.1
---
# **Trendyol LLM v1.8**
Trendyol LLM v1.8 is a generative model based on the Mistral 7B model. This is the repository for the chat model.
## Model Details
**Model Developers** Trendyol
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Trendyol LLM is an auto-regressive language model (based on Mistral 7B) that uses an optimized transformer architecture. The chat version is fine-tuned on instruction sets using LoRA with the following settings:
- **lr**=1e-4
- **lora_rank**=64
- **lora_alpha**=128
- **lora_trainable**=q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj
- **modules_to_save**=embed_tokens,lm_head
- **lora_dropout**=0.05
- **bf16**=True
- **max_seq_length**=1024
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png"
alt="drawing" width="600"/>
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

model_id = "Trendyol/Trendyol-LLM-7b-chat-v1.8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map='auto',
                                             torch_dtype=torch.bfloat16,
                                             load_in_8bit=True)

sampling_params = dict(do_sample=True, temperature=0.3, top_k=50, top_p=0.9)

pipe = pipeline("text-generation",
                model=model,
                tokenizer=tokenizer,
                device_map="auto",
                torch_dtype=torch.bfloat16,
                max_new_tokens=1024,
                return_full_text=True,
                repetition_penalty=1.1)

DEFAULT_SYSTEM_PROMPT = "Sen yardımcı bir asistansın ve sana verilen talimatlar doğrultusunda en iyi cevabı üretmeye çalışacaksın.\n"

TEMPLATE = (
    "[INST] {system_prompt}\n\n"
    "{instruction} [/INST]"
)

def generate_prompt(instruction, system_prompt=DEFAULT_SYSTEM_PROMPT):
    return TEMPLATE.format_map({'instruction': instruction, 'system_prompt': system_prompt})

def generate_output(user_query, sys_prompt=DEFAULT_SYSTEM_PROMPT):
    prompt = generate_prompt(user_query, sys_prompt)
    outputs = pipe(prompt, **sampling_params)
    return outputs[0]["generated_text"].split("[/INST]")[-1]

user_query = "Türkiye'de kaç il var?"
response = generate_output(user_query)
print(response)
```
With the chat template:
```python
pipe = pipeline("conversational",
                model=model,
                tokenizer=tokenizer,
                device_map="auto",
                torch_dtype=torch.bfloat16,
                max_new_tokens=1024,
                repetition_penalty=1.1)

messages = [
    {"role": "user", "content": "Türkiye'de kaç il var?"}
]
outputs = pipe(messages, **sampling_params)
print(outputs)
```
## Limitations, Risks, Bias, and Ethical Considerations
### Limitations and Known Biases
- **Primary Function and Application:** Trendyol LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. While often used for various applications, it is important to note that it has not undergone extensive real-world application testing. Its effectiveness and reliability across diverse scenarios remain largely unverified.
- **Language Comprehension and Generation:** The model is primarily trained in standard English and Turkish. Its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations.
- **Generation of False Information:** Users should be aware that Trendyol LLM may produce inaccurate or misleading information. Outputs should be considered as starting points or suggestions rather than definitive answers.
### Risks and Ethical Considerations
- **Potential for Harmful Use:** There is a risk that Trendyol LLM could be used to generate offensive or harmful language. We strongly discourage its use for any such purposes and emphasize the need for application-specific safety and fairness evaluations before deployment.
- **Unintended Content and Bias:** The model was trained on a large corpus of text data, which was not explicitly checked for offensive content or existing biases. Consequently, it may inadvertently produce content that reflects these biases or inaccuracies.
- **Toxicity:** Despite efforts to select appropriate training data, the model is capable of generating harmful content, especially when prompted explicitly. We encourage the open-source community to engage in developing strategies to minimize such risks.
### Recommendations for Safe and Ethical Usage
- **Human Oversight:** We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly.
- **Application-Specific Testing:** Developers intending to use Trendyol LLM should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model’s responses can be unpredictable and may occasionally be biased, inaccurate, or offensive.
- **Responsible Development and Deployment:** It is the responsibility of developers and users of Trendyol LLM to ensure its ethical and safe application. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences.
|
SotaChambers/test2
|
SotaChambers
| 2024-06-28T14:00:11Z | 39 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-06-28T13:31:56Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: test2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test2
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
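As a rough illustration only, these hyperparameters correspond to a Transformers `Seq2SeqTrainingArguments` configuration along these lines; the output directory is an assumption and the actual training script may differ:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="test2",             # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,                      # Native AMP mixed precision
)
```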
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
AdamKasumovic/llama3-8b-instruct-bactrian-x-en-100-percent-med-high-perplexity
|
AdamKasumovic
| 2024-06-28T13:55:10Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-28T13:51:43Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|