modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 to 2025-09-04) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 539 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 to 2025-09-04) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
wendy41/llama-2-koen-user111-100-nll
|
wendy41
| 2024-04-22T11:52:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T11:52:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
trokhmanenko/layoutlmv3-finetuned-cord_100
|
trokhmanenko
| 2024-04-22T11:51:51Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-04-19T13:31:52Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6249
- Precision: 0.8764
- Recall: 0.8947
- F1: 0.8854
- Accuracy: 0.8469
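The card does not include a usage snippet; the following is a minimal inference sketch, assuming the processor is loaded from the base checkpoint `microsoft/layoutlmv3-base` (the fine-tuned repo may or may not ship its own) and that Tesseract/pytesseract is installed for the built-in OCR:
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

repo = "trokhmanenko/layoutlmv3-finetuned-cord_100"
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained(repo)

image = Image.open("receipt.png").convert("RGB")   # hypothetical document image
encoding = processor(image, return_tensors="pt")   # OCR extracts words and boxes automatically
predicted_label_ids = model(**encoding).logits.argmax(-1)
```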
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 5.26 | 100 | 0.5571 | 0.8014 | 0.8619 | 0.8305 | 0.8249 |
| No log | 10.53 | 200 | 0.5038 | 0.8561 | 0.8838 | 0.8697 | 0.8454 |
| No log | 15.79 | 300 | 0.6271 | 0.8710 | 0.8758 | 0.8734 | 0.8297 |
| No log | 21.05 | 400 | 0.6114 | 0.8783 | 0.9001 | 0.8891 | 0.8520 |
| 0.3312 | 26.32 | 500 | 0.6249 | 0.8764 | 0.8947 | 0.8854 | 0.8469 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
|
FredDYyy/whisper-small-dv
|
FredDYyy
| 2024-04-22T11:45:06Z | 77 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-22T09:54:20Z |
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - FredDYyy
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.648850714608617
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - FredDYyy
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1753
- Wer Ortho: 63.3540
- Wer: 13.6489
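A minimal transcription sketch using the `transformers` pipeline (the audio file path is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Dhivehi speech recognition.
asr = pipeline("automatic-speech-recognition", model="FredDYyy/whisper-small-dv")

result = asr("sample.wav")  # path to a local audio file
print(result["text"])
```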
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1212 | 1.63 | 500 | 0.1753 | 63.3540 | 13.6489 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
wendy41/llama-2-koen-user0-200-nll
|
wendy41
| 2024-04-22T11:43:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T11:43:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
InferenceIllusionist/llama3-42b-v0-iMat-GGUF
|
InferenceIllusionist
| 2024-04-22T11:42:31Z | 102 | 12 | null |
[
"gguf",
"llama3",
"iMat",
"arxiv:2403.17887",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-21T20:44:24Z |
---
tags:
- gguf
- llama3
- iMat
---
<img src="https://i.imgur.com/P68dXux.png" width="400"/>
# llama3-42b-v0-iMat-GGUF
Quantized from fp32 with love. All credits to [Charles Goddard](https://huggingface.co/chargoddard) for the original model.
* Weighted quantizations were calculated using groups_merged.txt with 105 chunks (recommended amount for this file) and n_ctx=512. Special thanks to jukofyork for sharing [this process](https://huggingface.co/jukofyork/WizardLM-2-8x22B-imatrix)
For more information on the pruning technique used in this model, see https://arxiv.org/abs/2403.17887
A brief rundown of [iMatrix quant performance](https://github.com/ggerganov/llama.cpp/pull/5747)
<i>All quants are verified to be working prior to uploading to the repo for your safety and convenience.</i>
<b>Tip:</b> Pick a quant size that fits in your GPU while still leaving some room for context for best speed. You may need to pad this further depending on whether you are also running image generation or TTS.
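As a rough illustration of that tip, a loading sketch with `llama-cpp-python` (the GGUF filename is a placeholder; substitute the quant you actually downloaded from this repo):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="llama3-42b-v0-Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=-1,  # offload as many layers as fit; lower this if you also run image gen or TTS
    n_ctx=4096,       # leave room for context, per the tip above
)

out = llm("Q: What does iMatrix quantization change?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```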
FP16 model card can be found [here](https://huggingface.co/chargoddard/llama3-42b-v0)
|
baek26/bart-cnndm
|
baek26
| 2024-04-22T11:40:51Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-20T03:43:59Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: cnn_dailymail_8824_bart-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cnn_dailymail_8824_bart-base
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9201
- Rouge1: 0.2472
- Rouge2: 0.1256
- Rougel: 0.2063
- Rougelsum: 0.2331
- Gen Len: 20.0
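A minimal inference sketch, assuming the model is used for CNN/DailyMail-style summarization as the repo name suggests (input text and length limits are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="baek26/bart-cnndm")

article = "Long news article text goes here ..."
print(summarizer(article, max_length=20, min_length=5)[0]["summary_text"])
```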
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.2077 | 0.11 | 500 | 1.0668 | 0.2378 | 0.1128 | 0.1955 | 0.2228 | 20.0 |
| 1.1503 | 0.22 | 1000 | 1.0418 | 0.2376 | 0.1145 | 0.1964 | 0.223 | 20.0 |
| 1.1191 | 0.33 | 1500 | 1.0109 | 0.2409 | 0.1187 | 0.1995 | 0.2268 | 20.0 |
| 1.0828 | 0.45 | 2000 | 1.0048 | 0.2408 | 0.1192 | 0.2004 | 0.227 | 20.0 |
| 1.0546 | 0.56 | 2500 | 0.9911 | 0.2417 | 0.1206 | 0.2008 | 0.2278 | 20.0 |
| 1.0537 | 0.67 | 3000 | 0.9891 | 0.2418 | 0.1201 | 0.2014 | 0.2277 | 20.0 |
| 1.0643 | 0.78 | 3500 | 0.9895 | 0.2396 | 0.1194 | 0.1997 | 0.2259 | 20.0 |
| 1.0375 | 0.89 | 4000 | 0.9775 | 0.2434 | 0.122 | 0.2025 | 0.2293 | 20.0 |
| 1.013 | 1.0 | 4500 | 0.9728 | 0.244 | 0.1218 | 0.2029 | 0.2298 | 20.0 |
| 1.0247 | 1.11 | 5000 | 0.9705 | 0.243 | 0.1206 | 0.2019 | 0.2287 | 20.0 |
| 1.0374 | 1.23 | 5500 | 0.9642 | 0.2432 | 0.1217 | 0.2022 | 0.2292 | 20.0 |
| 1.0084 | 1.34 | 6000 | 0.9609 | 0.2437 | 0.1235 | 0.204 | 0.2299 | 20.0 |
| 1.0195 | 1.45 | 6500 | 0.9603 | 0.243 | 0.1221 | 0.2029 | 0.2291 | 20.0 |
| 0.9642 | 1.56 | 7000 | 0.9559 | 0.2438 | 0.1228 | 0.2035 | 0.2301 | 20.0 |
| 0.9903 | 1.67 | 7500 | 0.9540 | 0.243 | 0.1225 | 0.2029 | 0.2293 | 20.0 |
| 0.976 | 1.78 | 8000 | 0.9518 | 0.2434 | 0.1224 | 0.2025 | 0.2297 | 19.9997 |
| 1.0101 | 1.89 | 8500 | 0.9460 | 0.2452 | 0.1235 | 0.2042 | 0.231 | 20.0 |
| 0.9711 | 2.01 | 9000 | 0.9446 | 0.2431 | 0.1226 | 0.2032 | 0.2295 | 19.9995 |
| 0.9137 | 2.12 | 9500 | 0.9463 | 0.2459 | 0.1239 | 0.205 | 0.2318 | 20.0 |
| 0.9631 | 2.23 | 10000 | 0.9410 | 0.2451 | 0.1234 | 0.2043 | 0.2309 | 19.9999 |
| 0.9309 | 2.34 | 10500 | 0.9399 | 0.2446 | 0.1236 | 0.2042 | 0.2308 | 19.9991 |
| 0.9653 | 2.45 | 11000 | 0.9363 | 0.2444 | 0.1233 | 0.2039 | 0.2308 | 19.9999 |
| 0.9338 | 2.56 | 11500 | 0.9413 | 0.2439 | 0.1224 | 0.2028 | 0.2294 | 20.0 |
| 0.9373 | 2.67 | 12000 | 0.9334 | 0.245 | 0.1241 | 0.2047 | 0.2312 | 19.9996 |
| 0.9661 | 2.79 | 12500 | 0.9334 | 0.2456 | 0.1241 | 0.2051 | 0.2318 | 19.9999 |
| 0.9446 | 2.9 | 13000 | 0.9340 | 0.2447 | 0.1239 | 0.2045 | 0.2309 | 19.9999 |
| 0.9109 | 3.01 | 13500 | 0.9340 | 0.2445 | 0.1234 | 0.2041 | 0.2308 | 19.9999 |
| 0.8955 | 3.12 | 14000 | 0.9357 | 0.2459 | 0.1249 | 0.2055 | 0.2318 | 20.0 |
| 0.9163 | 3.23 | 14500 | 0.9319 | 0.2461 | 0.1239 | 0.205 | 0.2319 | 20.0 |
| 0.9059 | 3.34 | 15000 | 0.9320 | 0.2446 | 0.124 | 0.2044 | 0.2309 | 19.9997 |
| 0.8893 | 3.46 | 15500 | 0.9288 | 0.2462 | 0.1247 | 0.2053 | 0.2322 | 19.9999 |
| 0.8963 | 3.57 | 16000 | 0.9301 | 0.2441 | 0.124 | 0.2043 | 0.2306 | 20.0 |
| 0.8924 | 3.68 | 16500 | 0.9295 | 0.2431 | 0.1236 | 0.2038 | 0.2296 | 19.9997 |
| 0.8832 | 3.79 | 17000 | 0.9267 | 0.2457 | 0.1237 | 0.2049 | 0.2316 | 19.9999 |
| 0.8874 | 3.9 | 17500 | 0.9263 | 0.2458 | 0.125 | 0.2054 | 0.232 | 20.0 |
| 0.8464 | 4.01 | 18000 | 0.9272 | 0.2446 | 0.1234 | 0.2039 | 0.2305 | 20.0 |
| 0.8391 | 4.12 | 18500 | 0.9253 | 0.2453 | 0.1245 | 0.205 | 0.2313 | 20.0 |
| 0.8602 | 4.24 | 19000 | 0.9273 | 0.2464 | 0.1248 | 0.2055 | 0.2322 | 19.9997 |
| 0.8674 | 4.35 | 19500 | 0.9260 | 0.2449 | 0.1242 | 0.2047 | 0.2309 | 20.0 |
| 0.8634 | 4.46 | 20000 | 0.9261 | 0.2462 | 0.1248 | 0.2053 | 0.2322 | 20.0 |
| 0.8522 | 4.57 | 20500 | 0.9259 | 0.2456 | 0.1242 | 0.2052 | 0.2316 | 20.0 |
| 0.8532 | 4.68 | 21000 | 0.9256 | 0.2452 | 0.1242 | 0.2049 | 0.2315 | 20.0 |
| 0.8608 | 4.79 | 21500 | 0.9218 | 0.2446 | 0.1242 | 0.2049 | 0.2309 | 19.9997 |
| 0.8649 | 4.9 | 22000 | 0.9239 | 0.2461 | 0.1243 | 0.2047 | 0.2317 | 19.9997 |
| 0.8329 | 5.02 | 22500 | 0.9260 | 0.2456 | 0.1248 | 0.2052 | 0.2315 | 19.9999 |
| 0.8475 | 5.13 | 23000 | 0.9247 | 0.2449 | 0.1241 | 0.2045 | 0.2309 | 20.0 |
| 0.8595 | 5.24 | 23500 | 0.9246 | 0.2443 | 0.1239 | 0.2044 | 0.2306 | 20.0 |
| 0.8707 | 5.35 | 24000 | 0.9228 | 0.2458 | 0.1246 | 0.2054 | 0.2318 | 19.9997 |
| 0.8565 | 5.46 | 24500 | 0.9243 | 0.245 | 0.1241 | 0.2047 | 0.231 | 20.0 |
| 0.848 | 5.57 | 25000 | 0.9232 | 0.2464 | 0.1256 | 0.206 | 0.2324 | 20.0 |
| 0.8251 | 5.68 | 25500 | 0.9212 | 0.2465 | 0.1253 | 0.2057 | 0.2327 | 20.0 |
| 0.8352 | 5.8 | 26000 | 0.9203 | 0.245 | 0.1242 | 0.2043 | 0.2309 | 19.9996 |
| 0.837 | 5.91 | 26500 | 0.9178 | 0.2464 | 0.1247 | 0.2055 | 0.2321 | 19.9999 |
| 0.8233 | 6.02 | 27000 | 0.9204 | 0.2456 | 0.1247 | 0.2052 | 0.2318 | 20.0 |
| 0.8169 | 6.13 | 27500 | 0.9246 | 0.2454 | 0.1242 | 0.205 | 0.2314 | 20.0 |
| 0.8351 | 6.24 | 28000 | 0.9194 | 0.2453 | 0.1248 | 0.2052 | 0.2312 | 20.0 |
| 0.8275 | 6.35 | 28500 | 0.9221 | 0.2468 | 0.1255 | 0.2062 | 0.2329 | 19.9999 |
| 0.818 | 6.46 | 29000 | 0.9244 | 0.2456 | 0.1243 | 0.205 | 0.2316 | 20.0 |
| 0.8262 | 6.58 | 29500 | 0.9194 | 0.2471 | 0.1256 | 0.2064 | 0.233 | 20.0 |
| 0.8138 | 6.69 | 30000 | 0.9225 | 0.2469 | 0.1257 | 0.2062 | 0.233 | 20.0 |
| 0.8476 | 6.8 | 30500 | 0.9188 | 0.2467 | 0.1254 | 0.2059 | 0.2328 | 20.0 |
| 0.8376 | 6.91 | 31000 | 0.9216 | 0.2473 | 0.1255 | 0.2064 | 0.2331 | 20.0 |
| 0.7947 | 7.02 | 31500 | 0.9218 | 0.2471 | 0.1256 | 0.2061 | 0.2329 | 19.9999 |
| 0.7937 | 7.13 | 32000 | 0.9241 | 0.2465 | 0.1249 | 0.2057 | 0.2324 | 19.9996 |
| 0.8194 | 7.24 | 32500 | 0.9230 | 0.2471 | 0.1259 | 0.2063 | 0.2329 | 20.0 |
| 0.8122 | 7.36 | 33000 | 0.9204 | 0.2458 | 0.125 | 0.2055 | 0.232 | 19.9996 |
| 0.7676 | 7.47 | 33500 | 0.9232 | 0.2468 | 0.1253 | 0.206 | 0.2327 | 20.0 |
| 0.7772 | 7.58 | 34000 | 0.9226 | 0.2463 | 0.1251 | 0.2057 | 0.2323 | 20.0 |
| 0.809 | 7.69 | 34500 | 0.9197 | 0.2469 | 0.1255 | 0.2061 | 0.2329 | 19.9997 |
| 0.7839 | 7.8 | 35000 | 0.9205 | 0.2475 | 0.1261 | 0.2067 | 0.2334 | 19.9997 |
| 0.7936 | 7.91 | 35500 | 0.9186 | 0.2469 | 0.1254 | 0.2061 | 0.2327 | 19.9997 |
| 0.8108 | 8.02 | 36000 | 0.9215 | 0.2472 | 0.1253 | 0.206 | 0.2329 | 20.0 |
| 0.7987 | 8.14 | 36500 | 0.9219 | 0.2473 | 0.1254 | 0.2062 | 0.2331 | 19.9999 |
| 0.7881 | 8.25 | 37000 | 0.9213 | 0.2474 | 0.1253 | 0.206 | 0.233 | 20.0 |
| 0.8007 | 8.36 | 37500 | 0.9215 | 0.2474 | 0.1258 | 0.2064 | 0.2332 | 20.0 |
| 0.7789 | 8.47 | 38000 | 0.9226 | 0.2462 | 0.1252 | 0.2054 | 0.2321 | 20.0 |
| 0.8155 | 8.58 | 38500 | 0.9182 | 0.2465 | 0.1254 | 0.206 | 0.2325 | 19.9999 |
| 0.7863 | 8.69 | 39000 | 0.9187 | 0.2465 | 0.1252 | 0.2059 | 0.2323 | 19.9999 |
| 0.796 | 8.8 | 39500 | 0.9201 | 0.2469 | 0.1254 | 0.206 | 0.2327 | 19.9999 |
| 0.8003 | 8.92 | 40000 | 0.9197 | 0.2463 | 0.1252 | 0.2057 | 0.2323 | 20.0 |
| 0.803 | 9.03 | 40500 | 0.9206 | 0.2465 | 0.1253 | 0.2058 | 0.2323 | 19.9997 |
| 0.79 | 9.14 | 41000 | 0.9221 | 0.2467 | 0.1251 | 0.206 | 0.2326 | 19.9997 |
| 0.7605 | 9.25 | 41500 | 0.9211 | 0.247 | 0.1254 | 0.2059 | 0.2329 | 20.0 |
| 0.7543 | 9.36 | 42000 | 0.9214 | 0.2473 | 0.1258 | 0.2065 | 0.2333 | 19.9999 |
| 0.7959 | 9.47 | 42500 | 0.9203 | 0.2471 | 0.1255 | 0.2061 | 0.2332 | 19.9999 |
| 0.7826 | 9.58 | 43000 | 0.9205 | 0.2469 | 0.1256 | 0.206 | 0.2329 | 20.0 |
| 0.7835 | 9.7 | 43500 | 0.9198 | 0.2466 | 0.1252 | 0.2057 | 0.2326 | 20.0 |
| 0.7809 | 9.81 | 44000 | 0.9205 | 0.2469 | 0.1253 | 0.206 | 0.2328 | 20.0 |
| 0.7899 | 9.92 | 44500 | 0.9201 | 0.2472 | 0.1256 | 0.2063 | 0.2331 | 20.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ninagroot/Llama-450M
|
ninagroot
| 2024-04-22T11:40:39Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T09:01:23Z |
---
tags:
- generated_from_trainer
model-index:
- name: Llama-450M
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-450M
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8986
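A minimal generation sketch based on the repo's `text-generation` tag (prompt and length are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ninagroot/Llama-450M")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```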
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.6051 | 0.89 | 2 | 8.5427 |
| 8.1233 | 1.78 | 4 | 8.2081 |
| 7.2688 | 2.67 | 6 | 7.6786 |
| 6.3982 | 4.0 | 9 | 7.0782 |
| 5.8794 | 4.89 | 11 | 6.7779 |
| 5.4786 | 5.78 | 13 | 6.5717 |
| 4.994 | 6.67 | 15 | 6.3356 |
| 4.35 | 8.0 | 18 | 6.2257 |
| 3.9757 | 8.89 | 20 | 6.0451 |
| 3.4479 | 9.78 | 22 | 6.0242 |
| 3.1004 | 10.67 | 24 | 5.9219 |
| 2.5207 | 12.0 | 27 | 5.8224 |
| 2.1123 | 12.89 | 29 | 5.9286 |
| 1.7641 | 13.33 | 30 | 5.8986 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
relu-ntnu/bart-large-cnn_v4_trained_on_1000_lr_5e-5_r8_a16_all_layers
|
relu-ntnu
| 2024-04-22T11:40:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T11:40:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
theViolet/esm2_t30_150M_UR50D-finetuned-secondary-structure
|
theViolet
| 2024-04-22T11:37:45Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"esm",
"token-classification",
"generated_from_trainer",
"base_model:facebook/esm2_t30_150M_UR50D",
"base_model:finetune:facebook/esm2_t30_150M_UR50D",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-04-16T07:14:20Z |
---
license: mit
base_model: facebook/esm2_t30_150M_UR50D
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
model-index:
- name: esm2_t30_150M_UR50D-finetuned-secondary-structure
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t30_150M_UR50D-finetuned-secondary-structure
This model is a fine-tuned version of [facebook/esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0924
- Accuracy: 0.9829
- Mcc: 0.3696
- Recall: 0.3004
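A minimal per-residue prediction sketch based on the repo's `token-classification` tag (the protein sequence is illustrative, and the label meanings are whatever the fine-tune defined):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo = "theViolet/esm2_t30_150M_UR50D-finetuned-secondary-structure"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # example protein sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    predicted_label_ids = model(**inputs).logits.argmax(-1)
```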
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Mcc | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|
| No log | 1.0 | 128 | 0.0845 | 0.9844 | 0.2665 | 0.1033 |
| No log | 2.0 | 256 | 0.0815 | 0.9842 | 0.2316 | 0.0782 |
| No log | 3.0 | 384 | 0.0793 | 0.9852 | 0.3837 | 0.2325 |
| 0.0761 | 4.0 | 512 | 0.0842 | 0.9841 | 0.3854 | 0.2880 |
| 0.0761 | 5.0 | 640 | 0.0924 | 0.9829 | 0.3696 | 0.3004 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Niggendar/stylishpony_v10
|
Niggendar
| 2024-04-22T11:34:57Z | 115 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-04-22T11:29:39Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
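In the absence of an official snippet, a minimal text-to-image sketch based on the repo's `diffusers:StableDiffusionXLPipeline` tag (prompt and output path are illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/stylishpony_v10", torch_dtype=torch.float16
).to("cuda")

image = pipe("a stylish pony, highly detailed illustration").images[0]
image.save("stylishpony.png")
```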
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ABHISHEKMONU2001/llama3_8b_finetunning_22_April
|
ABHISHEKMONU2001
| 2024-04-22T11:27:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T11:26:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ransaka/gemma-2b-sinhala-translation-chatml
|
Ransaka
| 2024-04-22T11:24:32Z | 7 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-04-22T07:09:39Z |
---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- generator
model-index:
- name: gemma-2b-sinhala-translation-chatml
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2b-sinhala-translation-chatml
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
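Since this repo contains a PEFT adapter for `google/gemma-2b`, a minimal loading sketch attaches it to the base checkpoint; the prompt format below is an assumption, as the card does not document the chat template used:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "Ransaka/gemma-2b-sinhala-translation-chatml")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("Translate to Sinhala: Good morning.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```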
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.2
|
wendy41/llama-2-koen-user0-100-nll
|
wendy41
| 2024-04-22T11:14:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T11:13:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bhutchings/q-Taxi-v3
|
bhutchings
| 2024-04-22T11:11:59Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-22T11:11:44Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on which version the course notebook uses

# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course notebook
# (it downloads and unpickles the dictionary holding the Q-table and environment id).
model = load_from_hub(repo_id="bhutchings/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
chakkakrishna/Tinybest
|
chakkakrishna
| 2024-04-22T11:10:38Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"region:us"
] | null | 2024-04-22T11:07:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
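The same settings expressed as a `transformers` `BitsAndBytesConfig` (a reconstruction of the values listed above, not code taken from the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```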
### Framework versions
- PEFT 0.4.0
|
SubhasishSaha/dqn-flappy-sb3
|
SubhasishSaha
| 2024-04-22T11:08:48Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"FlappyBird-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-21T11:14:43Z |
---
library_name: stable-baselines3
tags:
- FlappyBird-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FlappyBird-v0
type: FlappyBird-v0
metrics:
- type: mean_reward
value: -9.30 +/- 0.00
name: mean_reward
verified: false
---
# **DQN** Agent playing **FlappyBird-v0**
This is a trained model of a **DQN** agent playing **FlappyBird-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import DQN
from huggingface_sb3 import load_from_hub

# The filename below is a placeholder; check the repo's files for the actual .zip checkpoint name.
checkpoint = load_from_hub(repo_id="SubhasishSaha/dqn-flappy-sb3", filename="dqn-FlappyBird-v0.zip")
model = DQN.load(checkpoint)
```
|
Tensorride/Classifier_with_external_sets_02
|
Tensorride
| 2024-04-22T11:07:53Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-21T12:36:44Z |
---
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Classifier_with_external_sets_02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Classifier_with_external_sets_02
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6968
- Accuracy: 0.5034
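A minimal inference sketch based on the repo's `text-classification` tag (the input sentence is illustrative, and the label names are whatever the fine-tune defined):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Tensorride/Classifier_with_external_sets_02")
print(classifier("Example sentence to classify."))
```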
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9983 | 289 | 0.6937 | 0.4966 |
| 0.6958 | 2.0 | 579 | 0.6958 | 0.4966 |
| 0.6958 | 2.9983 | 868 | 0.6972 | 0.4966 |
| 0.6845 | 4.0 | 1158 | 0.6931 | 0.5034 |
| 0.6845 | 4.9983 | 1447 | 0.7009 | 0.5034 |
| 0.6548 | 6.0 | 1737 | 0.7251 | 0.5034 |
| 0.6484 | 6.9983 | 2026 | 0.7186 | 0.5034 |
| 0.6484 | 8.0 | 2316 | 0.7049 | 0.5034 |
| 0.6453 | 8.9983 | 2605 | 0.6997 | 0.5034 |
| 0.6453 | 9.9827 | 2890 | 0.6968 | 0.5034 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
bhutchings/q-FrozenLake-v1-4x4-noSlippery
|
bhutchings
| 2024-04-22T11:05:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-22T11:04:50Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="bhutchings/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tomaszki/llama-5-b
|
tomaszki
| 2024-04-22T11:05:46Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T10:37:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli
|
MoritzLaurer
| 2024-04-22T11:03:52Z | 65,606 | 37 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"nli",
"multilingual",
"en",
"ar",
"bg",
"de",
"el",
"es",
"fr",
"hi",
"ru",
"sw",
"th",
"tr",
"ur",
"vi",
"zh",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:2002.10957",
"arxiv:1809.05053",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-02-11T13:10:37Z |
---
language:
- multilingual
- en
- ar
- bg
- de
- el
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
license: mit
tags:
- zero-shot-classification
- text-classification
- nli
- pytorch
metrics:
- accuracy
datasets:
- multi_nli
- xnli
pipeline_tag: zero-shot-classification
widget:
- text: "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels: "politics, economy, entertainment, environment"
---
# Multilingual MiniLMv2-L6-mnli-xnli
## Model description
This multilingual model can perform natural language inference (NLI) on 100+ languages and is therefore also
suitable for multilingual zero-shot classification. The underlying multilingual-MiniLM-L6 model was created
by Microsoft and was distilled from XLM-RoBERTa-large (see details [in the original paper](https://arxiv.org/pdf/2002.10957.pdf)
and newer information in [this repo](https://github.com/microsoft/unilm/tree/master/minilm)).
The model was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages,
as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
The main advantage of distilled models is that they are smaller (faster inference, lower memory requirements) than their teachers (XLM-RoBERTa-large).
The disadvantage is that they lose some of the performance of their larger teachers.
For highest inference speed, I recommend using this 6-layer model. For higher performance I recommend
[mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) (as of 14.02.2023).
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli")
sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on the XNLI development dataset and the MNLI train dataset.
The XNLI development set consists of 2490 professionally translated texts from English
to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)).
Note that XNLI also contains a training set of machine-translated versions of the MNLI dataset for 15 languages,
but due to quality issues with these machine translations, this model was only trained on the professional translations
from the XNLI development set and the original English MNLI training set (392,702 texts).
Not using machine-translated texts reduces the risk of overfitting the model to the 15 languages,
avoids catastrophic forgetting of the other languages the model was pre-trained on,
and significantly reduces training costs.
### Training procedure
The model was trained using the Hugging Face trainer with the following hyperparameters.
The exact underlying model is [mMiniLMv2-L6-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large).
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=4e-05,
per_device_train_batch_size=64, # batch size per device during training
per_device_eval_batch_size=120, # batch size for evaluation
warmup_ratio=0.06, # ratio of total training steps used for learning rate warmup
weight_decay=0.01, # strength of weight decay
)
```
### Eval results
The model was evaluated on the XNLI test set on 15 languages (5010 texts per language, 75150 in total).
Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data
in the specific language (cross-lingual transfer). This means that the model is also capable of doing NLI on
the other languages it was pre-trained on, but performance will most likely be lower than for the languages available in XNLI.
The average XNLI performance of multilingual-MiniLM-L6 reported in the paper is 0.68 ([see table 11](https://arxiv.org/pdf/2002.10957.pdf)).
This reimplementation has an average performance of 0.713.
This increase in performance is probably due to the addition of MNLI to the training data and to the fact that this model
was distilled from XLM-RoBERTa-large instead of -base (as multilingual-MiniLM-L6-v2 was).
|Datasets|avg_xnli|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.713|0.687|0.742|0.719|0.723|0.789|0.748|0.741|0.691|0.714|0.642|0.699|0.696|0.664|0.723|0.721|
|Speed text/sec (A100 GPU, eval_batch=120)|6093.0|6210.0|6003.0|6053.0|5409.0|6531.0|6205.0|5615.0|5734.0|5970.0|6219.0|6289.0|6533.0|5851.0|5970.0|6798.0|
|Datasets|mnli_m|mnli_mm|
| :---: | :---: | :---: |
|Accuracy|0.782|0.8|
|Speed text/sec (A100 GPU, eval_batch=120)|4430.0|4395.0|
## Limitations and bias
Please consult the original paper and literature on different NLI datasets for potential biases.
## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022.
‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’.
Preprint, June. Open Science Framework. https://osf.io/74b8k.
## Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
|
Mayurpai5/results
|
Mayurpai5
| 2024-04-22T10:41:37Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T10:36:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wendy41/llama-2-koen-user0-80-0419-2
|
wendy41
| 2024-04-22T10:39:02Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T05:10:09Z |
---
language:
- ko
license: llama2
library_name: transformers
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
#### Full License available at: https://huggingface.co/beomi/llama-2-koen-13b/blob/main/LICENSE
#### Dataset: Crawling
|
crazyup37/Llama3_8b_finetuned
|
crazyup37
| 2024-04-22T10:37:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T10:37:04Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** crazyup37
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
madelineoliver/ToolsBaer-EML-to-Hotmail-Importer
|
madelineoliver
| 2024-04-22T10:36:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-04-22T10:35:14Z |
Users can convert EML files to a Hotmail account securely and efficiently with a professional solution, the ToolsBaer EML to Hotmail Importer tool. It makes bulk conversion of EML files with attachments to a Hotmail account straightforward, and no data is lost during the conversion. Even non-technical users can operate the program easily thanks to its simple UI, and no additional software is required. The software runs on Windows 11, 10, 8.1, 8, 7, Vista, and XP. ToolsBaer EML to Hotmail Importer is available for download, with a full version available for purchase.
Read More:- http://www.toolsbaer.com/eml-to-hotmail-importer/
|
UncleMoJo/corgy_dog_LoRA
|
UncleMoJo
| 2024-04-22T10:36:15Z | 9 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-04-09T20:24:10Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - UncleMoJo/corgy_dog_LoRA
<Gallery />
## Model description
These are UncleMoJo/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/UncleMoJo/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
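Until the snippet above is filled in, a minimal sketch with `diffusers` could look like this; the LoRA weight filename is an assumption (check the Files & versions tab for the actual name):
```python
import torch
from diffusers import AutoPipelineForText2Image

# load the SDXL base pipeline and apply this LoRA on top of it
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipeline.load_lora_weights(
    "UncleMoJo/corgy_dog_LoRA",
    weight_name="pytorch_lora_weights.safetensors",  # assumption: default training-script output name
)

image = pipeline("a photo of TOK dog in a meadow").images[0]
image.save("corgy_dog.png")
```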
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Rhma/mistral_7b_fine-tuned-kaggle
|
Rhma
| 2024-04-22T10:34:59Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-04-22T10:03:03Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
h2oai/h2o-danube2-1.8b-chat
|
h2oai
| 2024-04-22T10:32:16Z | 2,779 | 61 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"arxiv:2401.16818",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-05T12:20:11Z |
---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
pipeline_tag: text-generation
---
# Model Card
## Summary
h2o-danube2-1.8b-chat is a chat fine-tuned model by H2O.ai with 1.8 billion parameters. We release three versions of this model:
| Model Name | Description |
|:-----------------------------------------------------------------------------------|:----------------|
| [h2oai/h2o-danube2-1.8b-base](https://huggingface.co/h2oai/h2o-danube2-1.8b-base) | Base model |
| [h2oai/h2o-danube2-1.8b-sft](https://huggingface.co/h2oai/h2o-danube2-1.8b-sft) | SFT tuned |
| [h2oai/h2o-danube2-1.8b-chat](https://huggingface.co/h2oai/h2o-danube2-1.8b-chat) | SFT + DPO tuned |
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
## Model Architecture
We adjust the Llama 2 architecture for a total of around 1.8b parameters. For details, please refer to our [Technical Report](https://arxiv.org/abs/2401.16818). We use the Mistral tokenizer with a vocabulary size of 32,000 and train our model up to a context length of 8,192.
The details of the model architecture are:
| Hyperparameter | Value |
|:----------------|:-------|
| n_layers | 24 |
| n_heads | 32 |
| n_query_groups | 8 |
| n_embd | 2560 |
| vocab size | 32000 |
| sequence length | 8192 |
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the library installed.
```bash
pip install "transformers>=4.39.3"
```
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="h2oai/h2o-danube2-1.8b-chat",
torch_dtype=torch.bfloat16,
device_map="auto",
)
# We use the HF Tokenizer chat template to format each message
# https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "Why is drinking water so healthy?"},
]
prompt = pipe.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
res = pipe(
prompt,
max_new_tokens=256,
)
print(res[0]["generated_text"])
```
This will apply and run the correct prompt format out of the box:
```
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
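For example, a minimal 4-bit loading sketch (this assumes `bitsandbytes` and `accelerate` are installed):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the chat model quantized to 4-bit and shard it across the available GPUs
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2o-danube2-1.8b-chat",
    load_in_4bit=True,  # or load_in_8bit=True
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2o-danube2-1.8b-chat")
```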
## Model Architecture
```
MistralForCausalLM(
(model): MistralModel(
(embed_tokens): Embedding(32000, 2560, padding_idx=0)
(layers): ModuleList(
(0-23): 24 x MistralDecoderLayer(
(self_attn): MistralAttention(
(q_proj): Linear(in_features=2560, out_features=2560, bias=False)
(k_proj): Linear(in_features=2560, out_features=640, bias=False)
(v_proj): Linear(in_features=2560, out_features=640, bias=False)
(o_proj): Linear(in_features=2560, out_features=2560, bias=False)
(rotary_emb): MistralRotaryEmbedding()
)
(mlp): MistralMLP(
(gate_proj): Linear(in_features=2560, out_features=6912, bias=False)
(up_proj): Linear(in_features=2560, out_features=6912, bias=False)
(down_proj): Linear(in_features=6912, out_features=2560, bias=False)
(act_fn): SiLU()
)
(input_layernorm): MistralRMSNorm()
(post_attention_layernorm): MistralRMSNorm()
)
)
(norm): MistralRMSNorm()
)
(lm_head): Linear(in_features=2560, out_features=32000, bias=False)
)
```
## Benchmarks
### 🤗 Open LLM Leaderboard
| Benchmark | acc_n |
|:--------------|:--------:|
| Average | 48.44 |
| ARC-challenge | 43.43 |
| Hellaswag | 73.54 |
| MMLU | 37.77 |
| TruthfulQA | 39.96 |
| Winogrande | 69.77 |
| GSM8K | 26.16 |
### MT-Bench
```
First Turn: 6.23
Second Turn: 5.34
Average: 5.79
```

## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
relu-ntnu/bart-large-cnn_v4_trained_on_100_lr_5e-5_r8_a16_all_layers
|
relu-ntnu
| 2024-04-22T10:31:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T10:31:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
srirambhavadish/Mixtral_arxt_v2
|
srirambhavadish
| 2024-04-22T10:31:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T10:29:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CaasiHUANG/flames-scorer
|
CaasiHUANG
| 2024-04-22T10:31:11Z | 43 | 1 |
transformers
|
[
"transformers",
"pytorch",
"internlm",
"text-classification",
"custom_code",
"zh",
"arxiv:2311.06899",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-09T09:15:01Z |
---
language:
- zh
metrics:
- accuracy
- recall
- precision
library_name: transformers
pipeline_tag: text-classification
---
# Flames-scorer
This is the designated scorer for the Flames benchmark – a highly adversarial Chinese benchmark for evaluating the value alignment of LLMs.
For more details, please refer to our [paper](https://arxiv.org/abs/2311.06899) and [GitHub repo](https://github.com/AIFlames/Flames/tree/main).
## Model Details
* Developed by: Shanghai AI Lab and Fudan NLP Group.
* Model type: We employ an InternLM-chat-7b as the backbone and build separate classifiers for each dimension on top of it. Then, we apply a multi-task training approach to train the scorer.
* Language(s): Chinese
* Paper: [FLAMES: Benchmarking Value Alignment of LLMs in Chinese](https://arxiv.org/abs/2311.06899)
* Contact: For questions and comments about the model, please email tengyan@pjlab.org.cn.
## Usage
The environment can be set up as:
```shell
$ pip install -r requirements.txt
```
And you can use `infer.py` to evaluate your model:
```shell
python infer.py --data_path YOUR_DATA_FILE.jsonl
```
The flames-scorer can be loaded by:
```python
from tokenization_internlm import InternLMTokenizer
from modeling_internlm import InternLMForSequenceClassification
tokenizer = InternLMTokenizer.from_pretrained("CaasiHUANG/flames-scorer", trust_remote_code=True)
model = InternLMForSequenceClassification.from_pretrained("CaasiHUANG/flames-scorer", trust_remote_code=True)
```
Please note that:
1. Ensure each entry in `YOUR_DATA_FILE.jsonl` includes the fields: "dimension", "prompt", and "response".
2. The predicted score will be stored in the "predicted" field, and the output will be saved in the same directory as `YOUR_DATA_FILE.jsonl`.
3. The accuracy of the Flames-scorer on out-of-distribution prompts (i.e., prompts not included in the Flames-prompts) has not been evaluated. Consequently, its predictions for such data may not be reliable.
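For instance, an input file with the required fields can be created like this (the dimension value shown is an assumption; use a dimension defined in the Flames repo):
```python
import json

# write one record per line with the fields infer.py expects
records = [
    {
        "dimension": "safety",  # assumption: pick a dimension defined in the Flames benchmark
        "prompt": "...",        # the Flames prompt shown to your model
        "response": "...",      # your model's answer to that prompt
    }
]
with open("YOUR_DATA_FILE.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```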
|
relu-ntnu/bart-large-cnn_v4_trained_on_50_lr_5e-5_r8_a16_all_layers
|
relu-ntnu
| 2024-04-22T10:29:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T10:29:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SiLamine/ppo-LunarLander-v2
|
SiLamine
| 2024-04-22T10:29:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-22T10:26:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.07 +/- 17.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
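Until the snippet above is filled in, a minimal loading-and-evaluation sketch could look like this; the checkpoint filename is an assumption (check the Files tab for the actual name):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# download the checkpoint from the Hub and load it into a PPO policy
checkpoint = load_from_hub(
    repo_id="SiLamine/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumption
)
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```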
|
relu-ntnu/bart-large-cnn_v4_trained_on_25_lr_5e-5_r8_a16_all_layers
|
relu-ntnu
| 2024-04-22T10:28:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T10:27:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SeaLLMs/SeaLMMM-7B-v0.1
|
SeaLLMs
| 2024-04-22T10:26:22Z | 18 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llava_next",
"image-text-to-text",
"multilingual",
"sea",
"conversational",
"en",
"zh",
"vi",
"id",
"th",
"ms",
"km",
"lo",
"my",
"tl",
"arxiv:2312.00738",
"arxiv:2306.05179",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-04-22T05:11:21Z |
---
license: other
license_name: seallms
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
language:
- en
- zh
- vi
- id
- th
- ms
- km
- lo
- my
- tl
tags:
- multilingual
- sea
---
<p align="center">
<img src="sealmmm.png" width="200" />
</p>
> SeaLLM will be able to "see"!
# *SeaLMMM-7B* - Large Multilingual Multimodal Models for Southeast Asia
<p align="center">
<a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Website</a>
<a href="https://huggingface.co/SeaLLMs/SeaLMMM-7B-v0.1" target="_blank" rel="noopener"> 🤗 Tech Memo</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B" target="_blank" rel="noopener"> 🤗 DEMO</a>
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a>
<a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a>
</p>
<!-- 🔥<span style="color: #ff3860">[HOT]</span> SeaLLMs project now has a dedicated website - [damo-nlp-sg.github.io/SeaLLMs](https://damo-nlp-sg.github.io/SeaLLMs/) -->
We introduce and [showcase](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B) the first iteration of [SeaLMMM](https://huggingface.co/SeaLLMs/SeaLMMM-7B-v0.1) -- a unified multilingual and multimodal model that excels at both text-only and vision tasks across multiple languages spoken in Southeast Asia.
### SeaLMMM-7B abilities
* SeaLMMM-7B is one of the strongest 7B vision-language models at **text-only tasks**, with performance similar to [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2). It is a text-first, vision-second model.
* SeaLMMM-7B **is** able to handle most SEA languages, making it more multilingual than the English-only LLava, or the bilingual (En+Zh) Qwen-VL and Yi-VL.
* Unlike LLava or specialized VLMs, which expect a single image at the beginning of the conversation, SeaLMMM-7B can seamlessly handle text-only turns at the beginning and visual instructions in the middle of a conversation, and it supports topic and language switching.
* SeaLMMM-7B can carry out multi-image generation or in-context visual learning; in that case, [Better llava next](https://github.com/huggingface/transformers/pull/29850) should be applied to enable this feature.
### Release and DEMO
- DEMO: [SeaLLMs/SeaLLM-7b](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B).
- Model weights:
- [SeaLMMM-7B-v0.1](https://huggingface.co/SeaLLMs/SeaLMMM-7B-v0.1).
- Explore SeaLLMs:
- [SeaLLMs/SeaLLM-7B-v2.5](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v2.5).
- [SeaLLMs/SeaLLM-7B-v2](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v2).
- [SeaLLMs/SeaLLM-7B-v1](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v1).
<blockquote style="color:red">
<p><strong style="color: red">Terms of Use and License</strong>:
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.
</blockquote>
> **Disclaimer**:
> We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.
> The logo was generated by DALL-E 3.
## Overview
SeaLMMM-7B-v0.1 is a multimodal extension of [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2).
It adopts the [Llava-1.6](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) (Llava-NEXT) architecture.
It is trained jointly on SeaLLM's multilingual text-only datasets and Llava-1.5 English-only vision data, as well as in-house synthetically generated multilingual multimodal vision data and open-source data, such as [ThaiIDCardSynt](https://huggingface.co/datasets/matichon/ThaiIDCardSynt).
### English Vision QA Tasks
| Multimodal Models | VQA2 | GQA | Vizwiz | SQA-IMG | TextQA |
| --- | --- | --- | --- | --- | --- |
| Qwen-VL-Chat | 78.20 | 57.50 | 38.90 | 68.20 | 61.50 |
| Llava-1.5-7b | 78.50 | 62.00 | 50.00 | 66.80 | 58.20 |
| Llava-1.5-13b | 80.00 | 63.30 | 53.60 | 71.60 | 61.30 |
| [SeaLMMM-7B-v0.1](https://huggingface.co/SeaLLMs/SeaLMMM-7B-v0.1) | 80.14 | 61.58 | 58.00 | 71.79 | 63.47 |
### Multilingual Text-only World Knowledge
We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, Th.
On text-only benchmarks, [SeaLMMM-7B-v0.1](https://huggingface.co/SeaLLMs/SeaLMMM-7B-v0.1) is generally on-par with [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) - its base LLM model. This demonstrates that our multimodal training regime does not vastly degrade text-only performance.
| Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Id<br>M3e | Th<br>M3e
|-----| ----- | --- | -- | ----- | ---- | --- | --- |
| GPT-3.5 | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 49.27 | 37.41
| Vistral-7B-chat | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 36.49 | 25.27
| Qwen1.5-7B-chat | Multi | 61.00 | 52.07 | 81.96 | 43.38 | 24.29 | 20.25
| SailorLM | Multi | 52.72 | 59.76 | 67.74 | 50.14 | 39.53 | 37.73
| [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) | Multi | 61.89 | 70.91 | 55.43 | 51.15 | 42.25 | 35.52
| [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5) | Multi | 64.05 | 76.87 | 62.54 | 63.11 | 48.64 | 46.86
| ---
| [SeaLMMM-7B-v0.1](https://huggingface.co/SeaLLMs/SeaLMMM-7B-v0.1) | Multi | 60.31 | 70.43 | 52.78 | 50.47 | 42.37 | 33.53
## Multilingual Multimodal Showcases
[SeaLMMM-7B-v0.1](https://huggingface.co/SeaLLMs/SeaLMMM-7B-v0.1) has stronger vision understanding and problem-solving abilities in languages beyond English and Chinese, especially SEA languages such as Vietnamese and Indonesian.

Image: find "x" in Vietnamese. Left: Llava-1.6-34B. Right: SeaLMMM-7B-v0.1.
<div class="row" style="display: flex; clear: both;">
<img src="llava_1.6_34b_find_x_vi.png" alt="Forest" style="float: left; width: 39%">
<img src="find_x_vi.png" alt="Snow" style="float: left; width: 59%">
</div>
### Limitations
* Despite being multilingual, SeaLMMM-7B-v0.1's multi-modal capabilities still work best in English; we are working to improve them in other languages.
* For OCR, it can only read English.
* SeaLMMM-7B-v0.1 sometimes still thinks it cannot process images in a multi-turn setting, due to the existing text-only SFT; future versions will fix this.
* Multi-modal multi-turn capabilities are still limited.
### Usage
#### Instruction format
**Unlike other models, the image token is `<|image|>`.**
```python
from transformers import AutoTokenizer

# Load the tokenizer (assumes the repo ships a standard Hugging Face tokenizer).
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLMMM-7B-v0.1")

prompt = """<|im_start|>system
You are a helpful assistant.</s>
<|im_start|>user
<|image|>
What is in the image?</s>
<|im_start|>assistant
There is 2 cats in the image.</s>"""

# <|im_start|> is not a special token.
# The Transformers chat_template should be consistent with the vLLM format.
# ! ENSURE 1 and only 1 bos `<s>` at the beginning of the sequence
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))
```
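Below is a minimal end-to-end inference sketch. It assumes the repo works with the standard `LlavaNextProcessor` / `LlavaNextForConditionalGeneration` classes and that the processor handles the `<|image|>` token shown above; the example image URL is arbitrary.
```python
import requests
import torch
from PIL import Image
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

model_id = "SeaLLMs/SeaLMMM-7B-v0.1"
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Any RGB image works here; this COCO image is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.</s>\n"
    "<|im_start|>user\n<|image|>\nWhat is in the image?</s>\n"
    "<|im_start|>assistant\n"
)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```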
## Acknowledgement to Our Linguists
We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset as well as evaluating our models across different aspects, especially safety.
## Citation
If you find our project useful, we hope you will kindly star our repo and cite our work as follows. Corresponding author: [l.bing@alibaba-inc.com](mailto:l.bing@alibaba-inc.com)
**Author list and order will change!**
* `*` and `^` denote equal contributions.
```
@article{damonlpsg2023seallm,
author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*, Weiwen Xu, Hou Pong Chan,
Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang,
Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang,
Chaoqun Liu, Hang Zhang, Lidong Bing},
title = {SeaLLMs - Large Language Models for Southeast Asia},
year = 2023,
Eprint = {arXiv:2312.00738},
}
```
|
relu-ntnu/bart-large-cnn_v4_trained_on_5_lr_5e-5_r8_a16_all_layers
|
relu-ntnu
| 2024-04-22T10:25:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T10:25:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hadifar/eventclassifier
|
hadifar
| 2024-04-22T10:21:26Z | 57 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-05T11:51:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ratna1704/vit-base-patch16-224-in21k-finetuned-lora-food101
|
Ratna1704
| 2024-04-22T10:20:30Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T08:18:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kuchidareo/detr
|
kuchidareo
| 2024-04-22T10:20:00Z | 188 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-04-21T09:19:03Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
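A hedged way to try the checkpoint is through the `object-detection` pipeline; the image URL below is an arbitrary example, not part of the (unknown) training data.
```python
from transformers import pipeline

# Load this fine-tuned DETR checkpoint for object detection.
detector = pipeline("object-detection", model="kuchidareo/detr")

# Run detection on an example image URL.
print(detector("http://images.cocodataset.org/val2017/000000039769.jpg"))
```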
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 313 | 5.5361 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
|
deadcode99/mistral-7b-finetuned-lime-v1
|
deadcode99
| 2024-04-22T10:18:55Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T10:02:46Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: mistral-7b-finetuned-lime-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-finetuned-lime-v1
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4881
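A minimal, hedged inference sketch is shown below; it assumes the fine-tune keeps the Mistral-Instruct chat template and that the repo contains full model weights.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deadcode99/mistral-7b-finetuned-lime-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build a chat prompt with the model's chat template.
messages = [{"role": "user", "content": "Explain what LIME does in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```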
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4064 | 0.95 | 10 | 1.4881 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Akirami/llama3-8b-orpo-eg1
|
Akirami
| 2024-04-22T10:18:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T10:17:47Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Akirami
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
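A hedged loading sketch with Unsloth is shown below; `max_seq_length` is an assumed value and the sketch presumes the repo contains weights loadable by `FastLanguageModel`.
```python
from unsloth import FastLanguageModel

# Load the fine-tuned model in 4-bit with Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Akirami/llama3-8b-orpo-eg1",
    max_seq_length=2048,   # assumed; adjust to your use case
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("The capital of France is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```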
|
syedmohiuddinzia/my_awesome_mind_model
|
syedmohiuddinzia
| 2024-04-22T10:17:52Z | 160 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-04-22T07:42:23Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.035398230088495575
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6442
- Accuracy: 0.0354
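A minimal, hedged inference sketch via the `audio-classification` pipeline is shown below; the `.wav` path is a placeholder, not a file from this repo.
```python
from transformers import pipeline

# Load this fine-tuned wav2vec2 checkpoint for intent classification.
classifier = pipeline("audio-classification", model="syedmohiuddinzia/my_awesome_mind_model")

# Replace with a real 16 kHz mono recording.
print(classifier("example_intent_recording.wav"))
```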
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6357 | 0.0885 |
| No log | 1.87 | 7 | 2.6412 | 0.0265 |
| 2.6386 | 2.93 | 11 | 2.6440 | 0.0354 |
| 2.6386 | 4.0 | 15 | 2.6423 | 0.0354 |
| 2.6386 | 4.8 | 18 | 2.6423 | 0.0354 |
| 2.6277 | 5.87 | 22 | 2.6438 | 0.0354 |
| 2.6277 | 6.93 | 26 | 2.6446 | 0.0354 |
| 2.6163 | 8.0 | 30 | 2.6442 | 0.0354 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
|
Ahmedkkh/homg
|
Ahmedkkh
| 2024-04-22T10:11:29Z | 0 | 0 | null |
[
"doi:10.57967/hf/2103",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-04-22T10:06:12Z |
---
license: creativeml-openrail-m
---
|
farhadali/Mistral-7B-Instruct-v0.2_Sparql
|
farhadali
| 2024-04-22T10:11:10Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-21T21:34:58Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
model-index:
- name: Mistral-7B-Instruct-v0.2_Sparql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2_Sparql
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
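Since this repo is a PEFT adapter, a hedged usage sketch loads the base model and attaches the adapter; the prompt is only an illustrative example.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "farhadali/Mistral-7B-Instruct-v0.2_Sparql"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned LoRA adapter

prompt = "[INST] Write a SPARQL query that lists all classes in an ontology. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```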
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.14.7
- Tokenizers 0.19.1
|
ingmarnitze/thaw-slump-segmentation
|
ingmarnitze
| 2024-04-22T10:07:06Z | 0 | 0 | null |
[
"remote sensing",
"segmentation",
"permafrost",
"retrogressive thaw slumps",
"image-segmentation",
"license:mit",
"region:us"
] |
image-segmentation
| 2024-03-08T16:16:54Z |
---
license: mit
pipeline_tag: image-segmentation
tags:
- remote sensing
- segmentation
- permafrost
- retrogressive thaw slumps
---
# General Info
## Classes
1: retrogressive thaw slumps
## Input data
## Examples
|
latif98/videomae-base-finetuned-isl-numbers_2
|
latif98
| 2024-04-22T10:05:45Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:latif98/videomae-base-finetuned-isl-numbers",
"base_model:finetune:latif98/videomae-base-finetuned-isl-numbers",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-04-21T11:45:23Z |
---
license: cc-by-nc-4.0
base_model: latif98/videomae-base-finetuned-isl-numbers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-isl-numbers_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-isl-numbers_2
This model is a fine-tuned version of [latif98/videomae-base-finetuned-isl-numbers](https://huggingface.co/latif98/videomae-base-finetuned-isl-numbers) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2759
- Accuracy: 0.6839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3800
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.5502 | 0.02 | 76 | 3.4251 | 0.0945 |
| 3.1152 | 1.02 | 152 | 3.0364 | 0.2756 |
| 2.6365 | 2.02 | 228 | 2.6197 | 0.3780 |
| 2.3879 | 3.02 | 304 | 2.1519 | 0.4646 |
| 1.9396 | 4.02 | 380 | 2.0804 | 0.4173 |
| 1.9285 | 5.02 | 456 | 1.9335 | 0.4488 |
| 1.5843 | 6.02 | 532 | 1.7907 | 0.4803 |
| 1.2387 | 7.02 | 608 | 1.8962 | 0.3858 |
| 1.2578 | 8.02 | 684 | 1.7191 | 0.4488 |
| 0.9611 | 9.02 | 760 | 1.7362 | 0.4882 |
| 0.9247 | 10.02 | 836 | 1.3898 | 0.5906 |
| 0.8107 | 11.02 | 912 | 1.9588 | 0.4094 |
| 0.7618 | 12.02 | 988 | 1.1416 | 0.6614 |
| 0.7083 | 13.02 | 1064 | 1.2812 | 0.6614 |
| 0.7098 | 14.02 | 1140 | 1.4601 | 0.5197 |
| 0.4601 | 15.02 | 1216 | 1.1276 | 0.6693 |
| 0.5684 | 16.02 | 1292 | 1.4792 | 0.5591 |
| 0.5044 | 17.02 | 1368 | 1.1236 | 0.6614 |
| 0.4551 | 18.02 | 1444 | 1.3894 | 0.6063 |
| 0.3488 | 19.02 | 1520 | 1.2918 | 0.6614 |
| 0.4711 | 20.02 | 1596 | 1.2510 | 0.6299 |
| 0.3451 | 21.02 | 1672 | 1.1265 | 0.6693 |
| 0.394 | 22.02 | 1748 | 1.1676 | 0.6378 |
| 0.234 | 23.02 | 1824 | 1.0714 | 0.7087 |
| 0.2318 | 24.02 | 1900 | 1.2647 | 0.6378 |
| 0.4294 | 25.02 | 1976 | 1.0250 | 0.7480 |
| 0.2084 | 26.02 | 2052 | 1.1361 | 0.6850 |
| 0.1724 | 27.02 | 2128 | 0.8791 | 0.7402 |
| 0.1715 | 28.02 | 2204 | 0.7549 | 0.7559 |
| 0.2719 | 29.02 | 2280 | 0.7708 | 0.7717 |
| 0.2021 | 30.02 | 2356 | 1.1394 | 0.7165 |
| 0.0999 | 31.02 | 2432 | 0.7838 | 0.7717 |
| 0.1473 | 32.02 | 2508 | 1.3809 | 0.6457 |
| 0.0939 | 33.02 | 2584 | 0.7839 | 0.7874 |
| 0.0952 | 34.02 | 2660 | 1.0636 | 0.7008 |
| 0.2684 | 35.02 | 2736 | 0.9194 | 0.7323 |
| 0.1628 | 36.02 | 2812 | 0.7346 | 0.8031 |
| 0.0584 | 37.02 | 2888 | 1.0112 | 0.7323 |
| 0.0567 | 38.02 | 2964 | 1.0584 | 0.7323 |
| 0.1358 | 39.02 | 3040 | 1.0566 | 0.7323 |
| 0.0796 | 40.02 | 3116 | 0.9323 | 0.7480 |
| 0.0828 | 41.02 | 3192 | 0.7611 | 0.7953 |
| 0.0661 | 42.02 | 3268 | 0.7284 | 0.7874 |
| 0.0882 | 43.02 | 3344 | 0.6982 | 0.7953 |
| 0.0398 | 44.02 | 3420 | 0.8586 | 0.7717 |
| 0.2085 | 45.02 | 3496 | 0.7990 | 0.7717 |
| 0.0509 | 46.02 | 3572 | 0.7134 | 0.8268 |
| 0.0791 | 47.02 | 3648 | 0.6887 | 0.8189 |
| 0.0469 | 48.02 | 3724 | 0.7159 | 0.8031 |
| 0.0621 | 49.02 | 3800 | 0.7062 | 0.8031 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
Niggendar/omegaPonyXLAnime_v10
|
Niggendar
| 2024-04-22T10:01:57Z | 125 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-04-22T09:56:52Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HWatervalley/TiToHe_mistral_model
|
HWatervalley
| 2024-04-22T10:00:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T22:21:32Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
myzz/sad
|
myzz
| 2024-04-22T09:55:27Z | 0 | 0 | null |
[
"zh",
"license:mit",
"region:us"
] | null | 2023-12-17T06:31:05Z |
---
license: mit
language: zh
---
|
qqq121/videomae-base-finetuned-ucf101-subset
|
qqq121
| 2024-04-22T09:55:22Z | 63 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-04-22T05:51:52Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2344
- Accuracy: 0.9143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7279 | 0.25 | 75 | 1.3109 | 0.5857 |
| 1.0543 | 1.25 | 150 | 0.5988 | 0.8143 |
| 0.3549 | 2.25 | 225 | 0.3976 | 0.8143 |
| 0.2314 | 3.25 | 300 | 0.2344 | 0.9143 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
allknowingroger/Llama3merge8-15B-MoE
|
allknowingroger
| 2024-04-22T09:53:46Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"sethuiyer/Medichat-Llama3-8B",
"psyche/llama3-8b-instruct-mrc-v0.3",
"conversational",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:finetune:sethuiyer/Medichat-Llama3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T09:46:07Z |
---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- sethuiyer/Medichat-Llama3-8B
- psyche/llama3-8b-instruct-mrc-v0.3
base_model:
- sethuiyer/Medichat-Llama3-8B
- psyche/llama3-8b-instruct-mrc-v0.3
---
# Llama3merge8-15B-MoE
Llama3merge8-15B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B)
* [psyche/llama3-8b-instruct-mrc-v0.3](https://huggingface.co/psyche/llama3-8b-instruct-mrc-v0.3)
## 🧩 Configuration
```yaml
base_model: sethuiyer/Medichat-Llama3-8B
experts:
- source_model: sethuiyer/Medichat-Llama3-8B
positive_prompts: ["medical"]
- source_model: psyche/llama3-8b-instruct-mrc-v0.3
positive_prompts: ["what"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Llama3merge8-15B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
michaelw37/sc63
|
michaelw37
| 2024-04-22T09:53:16Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T09:51:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EpicJhon/llama
|
EpicJhon
| 2024-04-22T09:39:23Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T09:33:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
StarGazer-Media/pokemon-lora-manoj
|
StarGazer-Media
| 2024-04-22T09:39:02Z | 6 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-04-22T05:26:55Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - StarGazer-Media/pokemon-lora-MANOJ
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the akadhim-ai/martin_valen_dataset dataset. Some example images are shown below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
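Until the official snippet lands, here is a minimal sketch (our assumption, not the authors' tested code) for applying these LoRA weights on top of the base model; the prompt is purely illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model these LoRA weights were trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.to("cuda")

# Attach the LoRA adaptation weights from this repository
pipe.load_lora_weights("StarGazer-Media/pokemon-lora-manoj")

image = pipe("a portrait of martin valen", num_inference_steps=30).images[0]
image.save("sample.png")
```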
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
adriansanz/ModeloFT1-v1
|
adriansanz
| 2024-04-22T09:38:38Z | 179 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:projecte-aina/roberta-base-ca-v2-cawikitc",
"base_model:finetune:projecte-aina/roberta-base-ca-v2-cawikitc",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-22T09:38:13Z |
---
license: apache-2.0
base_model: projecte-aina/roberta-base-ca-v2-cawikitc
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: stocks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stocks
This model is a fine-tuned version of [projecte-aina/roberta-base-ca-v2-cawikitc](https://huggingface.co/projecte-aina/roberta-base-ca-v2-cawikitc) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6639
- Accuracy: 0.7637
- Precision: 0.5304
- Recall: 0.4710
- F1: 0.4778
- Ratio: 0.7903
## Model description
More information needed
## Intended uses & limitations
More information needed
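In the meantime, a minimal usage sketch (not part of the original card); the label names come from whatever was stored in the checkpoint config and should be inspected before relying on them:
```python
from transformers import pipeline

# Load the fine-tuned Catalan RoBERTa classifier from this repository
classifier = pipeline("text-classification", model="adriansanz/ModeloFT1-v1")

# Example input; replace with real text in the language the model was trained on
print(classifier("Text d'exemple per classificar"))
```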
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- lr_scheduler_warmup_steps: 4
- num_epochs: 1
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Ratio |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| 0.8468 | 0.07 | 10 | 0.8350 | 0.6185 | 0.3093 | 0.5 | 0.3822 | 1.0 |
| 0.7922 | 0.14 | 20 | 0.8314 | 0.6185 | 0.3093 | 0.5 | 0.3822 | 1.0 |
| 0.8005 | 0.21 | 30 | 0.8059 | 0.6169 | 0.2060 | 0.3325 | 0.2544 | 0.9984 |
| 0.8038 | 0.28 | 40 | 0.7907 | 0.6185 | 0.3093 | 0.5 | 0.3822 | 1.0 |
| 0.7846 | 0.34 | 50 | 0.8060 | 0.6185 | 0.3093 | 0.5 | 0.3822 | 1.0 |
| 0.7539 | 0.41 | 60 | 0.7573 | 0.6274 | 0.5024 | 0.3422 | 0.2763 | 0.9847 |
| 0.725 | 0.48 | 70 | 0.8018 | 0.7435 | 0.4978 | 0.4906 | 0.4940 | 0.5847 |
| 0.6842 | 0.55 | 80 | 0.8437 | 0.7419 | 0.5035 | 0.4795 | 0.4901 | 0.6444 |
| 0.7415 | 0.62 | 90 | 0.7783 | 0.7468 | 0.5006 | 0.4832 | 0.4909 | 0.6444 |
| 0.6303 | 0.69 | 100 | 0.7194 | 0.7452 | 0.5009 | 0.4723 | 0.4808 | 0.7040 |
| 0.6844 | 0.76 | 110 | 0.7137 | 0.7702 | 0.5106 | 0.4996 | 0.5044 | 0.6468 |
| 0.699 | 0.83 | 120 | 0.6666 | 0.7806 | 0.5159 | 0.5039 | 0.5084 | 0.6653 |
| 0.7229 | 0.9 | 130 | 0.6636 | 0.7629 | 0.5233 | 0.4730 | 0.4799 | 0.775 |
| 0.6555 | 0.97 | 140 | 0.6646 | 0.7637 | 0.5312 | 0.4707 | 0.4775 | 0.7919 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
|
NMutangana/whisper-small-swahili
|
NMutangana
| 2024-04-22T09:38:27Z | 81 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-21T21:53:14Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: whisper-small-swahili
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: sw
split: None
args: sw
metrics:
- name: Wer
type: wer
value: 34.94378922684323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-swahili
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6821
- Wer: 34.9438
## Model description
More information needed
## Intended uses & limitations
More information needed
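As a rough usage sketch (not from the original card), the checkpoint loads with the standard 🤗 ASR pipeline; the audio path below is a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Swahili speech recognition
asr = pipeline(
    "automatic-speech-recognition",
    model="NMutangana/whisper-small-swahili",
)

# Transcribe a local audio file (placeholder filename)
result = asr("sample_swahili.wav")
print(result["text"])
```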
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3797 | 1.5625 | 1000 | 0.6029 | 40.2361 |
| 0.1129 | 3.125 | 2000 | 0.5886 | 35.5841 |
| 0.0507 | 4.6875 | 3000 | 0.6397 | 35.4404 |
| 0.0161 | 6.25 | 4000 | 0.6821 | 34.9438 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
GodsonNtungi/swahilillama3-8b
|
GodsonNtungi
| 2024-04-22T09:36:06Z | 95 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"sw",
"dataset:mwitiderrick/SwahiliAlpaca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T08:17:11Z |
---
language:
- sw
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: llama-3-8b
datasets: mwitiderrick/SwahiliAlpaca
pipeline_tag: text-generation
---
# Swahili llama 3 8b
- **Developed by:** GodsonNtungi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
An experimental model with poor-performing results, but a great start.
- **Training run:** 1 epoch
- **Time:** 9 hours 20 minutes 7 seconds
- **Training loss:** 0.8683
**PEFT parameters**
```python
model = FastLanguageModel.get_peft_model(
model,
r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 16,
lora_dropout = 0, # Supports any, but = 0 is optimized
bias = "none", # Supports any, but = "none" is optimized
# [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
random_state = 3407,
use_rslora = False, # We support rank stabilized LoRA
loftq_config = None, # And LoftQ
)
```
**Weakness** \
The model is not properly fine-tuned to emit an end-of-text token when needed, so generations often start well and then degrade into gibberish, depending on the max token limit set.
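Given that weakness, a cautious inference sketch that simply caps the number of generated tokens (it assumes the full fine-tuned weights in this repository load directly with transformers; generation settings are illustrative, not tuned):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GodsonNtungi/swahilillama3-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Andika hadithi fupi kuhusu safari ya pwani."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Cap new tokens explicitly, since the model may never emit an end-of-text token
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```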
|
ian00000/Mistral-7B_offensive_finetuned2
|
ian00000
| 2024-04-22T09:34:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-04-22T09:34:37Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
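As a stopgap, a minimal sketch assuming this repository holds a PEFT adapter on top of mistralai/Mistral-7B-v0.1, as declared in the metadata; the prompt format is an assumption:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base Mistral-7B weights and applies the adapter from this repository
model = AutoPeftModelForCausalLM.from_pretrained(
    "ian00000/Mistral-7B_offensive_finetuned2", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# The expected prompt format is undocumented; this is only an illustrative query
prompt = "Classify the following comment: This is a test."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```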
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
Gokul29/intent_recognition
|
Gokul29
| 2024-04-22T09:32:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T09:31:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
saransh03sharma/mintrec2-mistral-2-7b-200
|
saransh03sharma
| 2024-04-22T09:30:22Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T09:26:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kpi/news-extract-paritial-516
|
kpi
| 2024-04-22T09:25:18Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T08:20:35Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** kpi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
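No usage snippet is provided; a minimal sketch, assuming the full merged weights (not only an adapter) are stored in this repository:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kpi/news-extract-paritial-516"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt only; the intended prompt format is not documented in this card
prompt = "Extract the key facts from this news item: The central bank raised rates by 25 basis points."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```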
|
Kgr20/AnnualSummarizer
|
Kgr20
| 2024-04-22T09:21:03Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-22T03:28:52Z |
---
tags:
- generated_from_trainer
model-index:
- name: AnnualSummarizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AnnualSummarizer
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
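In the absence of documentation, a minimal sketch (an assumption, since the card does not state the intended task) treating the checkpoint as a T5-style summarizer:
```python
from transformers import pipeline

# T5 text2text checkpoints can be driven through the summarization pipeline
summarizer = pipeline("summarization", model="Kgr20/AnnualSummarizer")

report_excerpt = (
    "Revenue grew 12% year over year, driven by strong demand in the services "
    "segment, while operating expenses remained broadly flat."
)
print(summarizer(report_excerpt, max_length=60, min_length=10)[0]["summary_text"])
```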
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
lenate/lenate_model_12_albert-base-v2
|
lenate
| 2024-04-22T09:19:15Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-22T09:12:37Z |
---
license: apache-2.0
base_model: albert/albert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: lenate_model_12_albert-base-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lenate_model_12_albert-base-v2
This model is a fine-tuned version of [albert/albert-base-v2](https://huggingface.co/albert/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5494
- Accuracy: 0.7622
## Model description
More information needed
## Intended uses & limitations
More information needed
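For reference, a minimal sketch (not from the original card) for running the classifier directly; the label names are whatever was saved in the checkpoint config:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "lenate/lenate_model_12_albert-base-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```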
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 355 | 0.6467 | 0.7212 |
| 0.7746 | 2.0 | 710 | 0.5847 | 0.7241 |
| 0.5448 | 3.0 | 1065 | 0.5494 | 0.7622 |
| 0.5448 | 4.0 | 1420 | 0.6416 | 0.7368 |
| 0.3705 | 5.0 | 1775 | 0.6439 | 0.7735 |
| 0.2112 | 6.0 | 2130 | 0.8791 | 0.7643 |
| 0.2112 | 7.0 | 2485 | 1.1350 | 0.7657 |
| 0.1012 | 8.0 | 2840 | 1.3247 | 0.7721 |
| 0.0294 | 9.0 | 3195 | 1.4469 | 0.7699 |
| 0.0112 | 10.0 | 3550 | 1.4783 | 0.7699 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
|
atsuki-yamaguchi/bloom-1b1-focus-ja
|
atsuki-yamaguchi
| 2024-04-22T09:13:11Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T12:41:48Z |
---
license: mit
language: ja
---
BLOOM-1B Japanese [LAPT + FOCUS]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-focus-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-focus-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-focus-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
jpodivin/Meta-Llama-Guard-2-8B-GGUF
|
jpodivin
| 2024-04-22T09:08:24Z | 47 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"llama-3",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-Guard-2-8B",
"base_model:quantized:meta-llama/Meta-Llama-Guard-2-8B",
"license:other",
"region:us",
"conversational"
] |
text-generation
| 2024-04-21T20:14:00Z |
---
base_model: meta-llama/Meta-Llama-Guard-2-8B
library_name: transformers
pipeline_tag: text-generation
model_creator: meta-llama
model_name: Meta-Llama-Guard-2-8B
model_type: Llama3
inference: false
license: other
language:
- en
tags:
- llama
- llama-3
---
# Meta-Llama-Guard-2-8B-GGUF
Quantized Meta-Llama-Guard-2-8B models using recent versions of llama.cpp.
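No usage example is included; one way to run a GGUF file is through llama-cpp-python — a sketch where the filename is a placeholder to be replaced with an actual file from this repository:
```python
from llama_cpp import Llama

# Point model_path at whichever quantized GGUF file you downloaded from this repo
llm = Llama(model_path="Meta-Llama-Guard-2-8B.Q4_K_M.gguf", n_ctx=4096)

# Llama Guard expects a moderation-style prompt; this plain completion is only a smoke test
output = llm("Hello", max_tokens=16)
print(output["choices"][0]["text"])
```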
|
atsuki-yamaguchi/Mistral-7B-v0.1-focus-ja
|
atsuki-yamaguchi
| 2024-04-22T09:05:48Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T16:44:24Z |
---
license: mit
language: ja
---
Mistral-7B Japanese [LAPT + FOCUS]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-focus-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-focus-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-focus-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/Mistral-7B-v0.1-heuristics-ja
|
atsuki-yamaguchi
| 2024-04-22T09:05:45Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T15:02:46Z |
---
license: mit
language: ja
---
Mistral-7B Japanese [LAPT + Heuristics]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-heuristics-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-heuristics-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-heuristics-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/Mistral-7B-v0.1-random-ar
|
atsuki-yamaguchi
| 2024-04-22T09:05:33Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T17:31:26Z |
---
license: mit
language: ar
---
Mistral-7B Arabic [LAPT + Random]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-random-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-random-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/tigerbot-7b-base-heuristics-ja
|
atsuki-yamaguchi
| 2024-04-22T09:05:02Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T13:52:31Z |
---
license: mit
language: ja
---
TigerBot-7B Japanese [LAPT + Heuristics]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-heuristics-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-heuristics-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-heuristics-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
ripaaiii/fine-tune-C1-revised
|
ripaaiii
| 2024-04-22T09:04:55Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-04-20T20:40:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
atsuki-yamaguchi/tigerbot-7b-base-clp-ar
|
atsuki-yamaguchi
| 2024-04-22T09:04:54Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T17:07:14Z |
---
license: mit
language: ar
---
TigerBot-7B Arabic [LAPT + CLP]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-clp-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-clp-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/tigerbot-7b-base-random-ar
|
atsuki-yamaguchi
| 2024-04-22T09:04:53Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T17:01:19Z |
---
license: mit
language: ar
---
TigerBot-7B Arabic [LAPT + Random]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-random-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-random-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
xhxiao/sd-class-butterflies-32
|
xhxiao
| 2024-04-22T09:04:28Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-04-22T09:04:17Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('xhxiao/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Shaleen123/kandinsky-2.2-test
|
Shaleen123
| 2024-04-22T09:04:21Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"kandinsky",
"license:apache-2.0",
"diffusers:KandinskyV22Pipeline",
"region:us"
] |
text-to-image
| 2024-04-22T08:56:49Z |
---
license: apache-2.0
prior:
- kandinsky-community/kandinsky-2-2-prior
tags:
- text-to-image
- kandinsky
inference: false
---
# Kandinsky 2.2
Kandinsky inherits best practices from Dall-E 2 and Latent diffusion while introducing some new ideas.
It uses the CLIP model as a text and image encoder, and diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.
The Kandinsky model is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey) and [Denis Dimitrov](https://github.com/denndimitrov)
## Usage
Kandinsky 2.2 is available in diffusers!
```bash
pip install diffusers transformers accelerate
```
### Text to image
```python
from diffusers import AutoPipelineForText2Image
import torch
pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "portrait of a young women, blue eyes, cinematic"
negative_prompt = "low quality, bad quality"
image = pipe(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale =1.0, height=768, width=768).images[0]
image.save("portrait.png")
```

### Text Guided Image-to-Image Generation
```python
from PIL import Image
import requests
from io import BytesIO
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image = original_image.resize((768, 512))
```

```python
from diffusers import AutoPipelineForImage2Image
import torch
pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"
image = pipe(prompt=prompt, image=original_image, strength=0.3, height=768, width=768).images[0]
image.save("fantasy_land.png")
```

### Interpolate
```python
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline
from diffusers.utils import load_image
import PIL
import torch
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")
img1 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)
img2 = load_image(
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/starry_night.jpeg"
)
# add all the conditions we want to interpolate, can be either text or image
images_texts = ["a cat", img1, img2]
# specify the weights for each condition in images_texts
weights = [0.3, 0.3, 0.4]
# We can leave the prompt empty
prompt = ""
prior_out = pipe_prior.interpolate(images_texts, weights)
pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe(**prior_out, height=768, width=768).images[0]
image.save("starry_cat.png")
```

## Model Architecture
### Overview
Kandinsky 2.2 is a text-conditional diffusion model based on unCLIP and latent diffusion, composed of a transformer-based image prior model, a unet diffusion model, and a decoder.
The model architectures are illustrated in the figure below - the chart on the left describes the process to train the image prior model, the figure in the center is the text-to-image generation process, and the figure on the right is image interpolation.
<p float="left">
<img src="https://raw.githubusercontent.com/ai-forever/Kandinsky-2/main/content/kandinsky21.png"/>
</p>
Specifically, the image prior model was trained on CLIP text and image embeddings generated with a pre-trained [CLIP-ViT-G model](https://huggingface.co/laion/CLIP-ViT-g-14-laion2B-s12B-b42K). The trained image prior model is then used to generate CLIP image embeddings for input text prompts. Both the input text prompts and its CLIP image embeddings are used in the diffusion process. A [MoVQGAN](https://openreview.net/forum?id=Qb-AoSw4Jnm) model acts as the final block of the model, which decodes the latent representation into an actual image.
### Details
The image prior training of the model was performed on the [LAION Improved Aesthetics dataset](https://huggingface.co/datasets/bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images), and then fine-tuning was performed on the [LAION HighRes data](https://huggingface.co/datasets/laion/laion-high-resolution).
The main Text2Image diffusion model was trained on the [LAION HighRes dataset](https://huggingface.co/datasets/laion/laion-high-resolution) and then fine-tuned on a dataset of 2M very high-quality, high-resolution images with descriptions (COYO, anime, landmarks_russia, and a number of others) collected separately from open sources.
The main change in Kandinsky 2.2 is the switch to CLIP-ViT-G as the image encoder, which significantly increases the model's capability to generate more aesthetic pictures and better understand text, thus enhancing its overall performance.
Due to the switch of the CLIP model, the image prior model was retrained, and the Text2Image diffusion model was fine-tuned for 2000 iterations. Kandinsky 2.2 was trained on data of various resolutions, from 512 x 512 to 1536 x 1536, as well as different aspect ratios. As a result, Kandinsky 2.2 can generate 1024 x 1024 outputs with any aspect ratio.
### Evaluation
We quantitatively measured the performance of Kandinsky 2.1 on the COCO_30k dataset in zero-shot mode. The table below presents the FID scores.
FID metric values for generative models on COCO_30k
| | FID (30k)|
|:------|----:|
| eDiff-I (2022) | 6.95 |
| Imagen (2022) | 7.27 |
| Kandinsky 2.1 (2023) | 8.21|
| Stable Diffusion 2.1 (2022) | 8.59 |
| GigaGAN, 512x512 (2023) | 9.09 |
| DALL-E 2 (2022) | 10.39 |
| GLIDE (2022) | 12.24 |
| Kandinsky 1.0 (2022) | 15.40 |
| DALL-E (2021) | 17.89 |
| Kandinsky 2.0 (2022) | 20.00 |
| GLIGEN (2022) | 21.04 |
For more information, please refer to the upcoming technical report.
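As a rough illustration of how such a zero-shot FID number can be reproduced, the sketch below compares a folder of generated samples against a folder of COCO reference images using `torchmetrics`; the folder names and image handling are assumptions, not part of the official evaluation code.
```python
# Hedged sketch of FID evaluation; assumes both folders contain RGB PNGs of the
# same resolution (e.g. already resized to 299x299). Not the official eval code.
import torch
from pathlib import Path
from torchmetrics.image.fid import FrechetInceptionDistance
from torchvision.io import read_image, ImageReadMode

def load_batch(folder: str) -> torch.Tensor:
    # Stack images into a uint8 tensor of shape (N, 3, H, W)
    paths = sorted(Path(folder).glob("*.png"))
    return torch.stack([read_image(str(p), mode=ImageReadMode.RGB) for p in paths])

fid = FrechetInceptionDistance(feature=2048)
fid.update(load_batch("coco_30k_reference"), real=True)   # reference images
fid.update(load_batch("kandinsky_samples"), real=False)   # model generations
print(f"FID: {fid.compute().item():.2f}")
```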
## BibTex
If you find this repository useful in your research, please cite:
```
@misc{kandinsky2_2,
title = {Kandinsky 2.2},
author = {Arseniy Shakhmatov and Anton Razzhigaev and Aleksandr Nikolich and Vladimir Arkhipkin and Igor Pavlov and Andrey Kuznetsov and Denis Dimitrov},
year = {2023},
howpublished = {},
}
```
|
atsuki-yamaguchi/bloom-7b1-clp-ja
|
atsuki-yamaguchi
| 2024-04-22T09:04:08Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T16:42:16Z |
---
license: mit
language: ja
---
BLOOM-7B Japanese [LAPT + CLP]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-clp-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-clp-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-clp-ja",
device_map="auto",
load_in_8bit=True,
)
```
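The snippet above only loads the model and tokenizer. A minimal generation sketch might look like the following; the Japanese prompt and decoding settings are illustrative and not part of the original card.
```python
# Illustrative generation example; prompt and settings are placeholders.
import torch

text = "日本の首都は"
inputs = tokenizer(text, return_tensors="pt").to(model.device)  # move inputs to the model's device
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```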
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-7b1-random-ja
|
atsuki-yamaguchi
| 2024-04-22T09:04:06Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T16:37:48Z |
---
license: mit
language: ja
---
BLOOM-7B Japanese [LAPT + Random]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-random-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-random-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-random-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-7b1-heuristics-ja
|
atsuki-yamaguchi
| 2024-04-22T09:04:03Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T12:49:42Z |
---
license: mit
language: ja
---
BLOOM-7B Japanese [LAPT + Heuristics]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-heuristics-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-heuristics-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-heuristics-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-7b1-random-ar
|
atsuki-yamaguchi
| 2024-04-22T09:03:54Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T16:34:28Z |
---
license: mit
language: ar
---
BLOOM-7B Arabic [LAPT + Random]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-random-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-random-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-7b1-focus-ar
|
atsuki-yamaguchi
| 2024-04-22T09:03:52Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T16:29:41Z |
---
license: mit
language: ar
---
BLOOM-7B Arabic [LAPT + FOCUS]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-focus-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-focus-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-7b1-heuristics-ar
|
atsuki-yamaguchi
| 2024-04-22T09:03:50Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T12:20:58Z |
---
license: mit
language: ar
---
BLOOM-7B Arabic [LAPT + Heuristics]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-heuristics-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-heuristics-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-7b1-clpp-ar
|
atsuki-yamaguchi
| 2024-04-22T09:03:49Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T12:13:25Z |
---
license: mit
language: ar
---
BLOOM-7B Arabic [LAPT + CLP+]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-clpp-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-clpp-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-7b1-heuristics-de
|
atsuki-yamaguchi
| 2024-04-22T09:03:40Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"de",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T12:36:35Z |
---
license: mit
language: de
---
BLOOM-7B German [LAPT + Heuristics]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-heuristics-de"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-heuristics-de"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-heuristics-de",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-7b1-clpp-de
|
atsuki-yamaguchi
| 2024-04-22T09:03:39Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"de",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T12:28:56Z |
---
license: mit
language: de
---
BLOOM-7B German [LAPT + CLP+]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-clpp-de"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-clpp-de"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-clpp-de",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-1b1-clp-ja
|
atsuki-yamaguchi
| 2024-04-22T09:03:30Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T11:17:19Z |
---
license: mit
language: ja
---
BLOOM-1B Japanese [LAPT + CLP]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-clp-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-clp-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-clp-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-1b1-clpp-ja
|
atsuki-yamaguchi
| 2024-04-22T09:03:24Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T12:43:51Z |
---
license: mit
language: ja
---
BLOOM-1B Japanese [LAPT + CLP+]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-clpp-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-clpp-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-clpp-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-1b1-random-ar
|
atsuki-yamaguchi
| 2024-04-22T09:03:21Z | 161 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T12:17:23Z |
---
license: mit
language: ar
---
BLOOM-1B Arabic [LAPT + Random]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-random-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-random-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-1b1-clpp-ar
|
atsuki-yamaguchi
| 2024-04-22T09:03:17Z | 166 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T12:24:34Z |
---
license: mit
language: ar
---
BLOOM-1B Arabic [LAPT + CLP+]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-clpp-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-1b1-clpp-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/Mistral-7B-v0.1-heuristics-untied-ja
|
atsuki-yamaguchi
| 2024-04-22T09:01:13Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T18:04:25Z |
---
license: mit
language: ja
---
Mistral-7B Japanese [LAPT + Heuristics (Untied)]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-heuristics-untied-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-heuristics-untied-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-heuristics-untied-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/Mistral-7B-v0.1-clpp-untied-ja
|
atsuki-yamaguchi
| 2024-04-22T09:01:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T08:46:02Z |
---
license: mit
language: ja
---
Mistral-7B Japanese [LAPT + CLP+ (Untied)]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-clpp-untied-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-clpp-untied-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-clpp-untied-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/Mistral-7B-v0.1-heuristics-untied-ar
|
atsuki-yamaguchi
| 2024-04-22T09:01:11Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T17:44:23Z |
---
license: mit
language: ar
---
Mistral-7B Arabic [LAPT + Heuristics (Untied)]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-heuristics-untied-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-heuristics-untied-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/tigerbot-7b-base-heuristics-untied-ja
|
atsuki-yamaguchi
| 2024-04-22T09:01:01Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T17:38:06Z |
---
license: mit
language: ja
---
TigerBot-7B Japanese [LAPT + Heuristics (Untied)]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-heuristics-untied-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-heuristics-untied-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-heuristics-untied-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/tigerbot-7b-base-clpp-untied-ja
|
atsuki-yamaguchi
| 2024-04-22T09:00:59Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ja",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T17:32:35Z |
---
license: mit
language: ja
---
TigerBot-7B Japanese [LAPT + CLP+ (Untied)]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-clpp-untied-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-clpp-untied-ja"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-clpp-untied-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/tigerbot-7b-base-heuristics-untied-ar
|
atsuki-yamaguchi
| 2024-04-22T09:00:57Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ar",
"arxiv:2402.10712",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-21T17:31:41Z |
---
license: mit
language: ar
---
TigerBot-7B Arabic [LAPT + Heuristics (Untied)]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-heuristics-untied-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"aubmindlab/aragpt2-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-heuristics-untied-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
shubham11/mistralrelease100
|
shubham11
| 2024-04-22T09:00:10Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T09:00:07Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** shubham11
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
atsuki-yamaguchi/Mistral-7B-v0.1-lapt-sw
|
atsuki-yamaguchi
| 2024-04-22T08:57:34Z | 0 | 0 | null |
[
"safetensors",
"sw",
"arxiv:2402.10712",
"license:mit",
"region:us"
] | null | 2024-02-19T12:21:03Z |
---
license: mit
language: sw
---
Mistral-7B Swahili [LAPT]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-lapt-sw"
)
tokenizer = AutoTokenizer.from_pretrained(
"mistralai/Mistral-7B-v0.1"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-lapt-sw",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/Mistral-7B-v0.1-lapt-ja
|
atsuki-yamaguchi
| 2024-04-22T08:57:33Z | 0 | 0 | null |
[
"safetensors",
"ja",
"arxiv:2402.10712",
"license:mit",
"region:us"
] | null | 2024-02-19T12:19:05Z |
---
license: mit
language: ja
---
Mistral-7B Japanese [LAPT]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-lapt-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"mistralai/Mistral-7B-v0.1"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/Mistral-7B-v0.1-lapt-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/tigerbot-7b-base-lapt-sw
|
atsuki-yamaguchi
| 2024-04-22T08:57:31Z | 0 | 0 | null |
[
"safetensors",
"sw",
"arxiv:2402.10712",
"license:mit",
"region:us"
] | null | 2024-02-19T12:12:47Z |
---
license: mit
language: sw
---
TigerBot-7B Swahili [LAPT]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-lapt-sw"
)
tokenizer = AutoTokenizer.from_pretrained(
"TigerResearch/tigerbot-7b-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-lapt-sw",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/tigerbot-7b-base-lapt-ar
|
atsuki-yamaguchi
| 2024-04-22T08:57:30Z | 0 | 0 | null |
[
"safetensors",
"ar",
"arxiv:2402.10712",
"license:mit",
"region:us"
] | null | 2024-02-19T12:06:48Z |
---
license: mit
language: ar
---
TigerBot-7B Arabic [LAPT]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-lapt-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"TigerResearch/tigerbot-7b-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-lapt-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/tigerbot-7b-base-lapt-ja
|
atsuki-yamaguchi
| 2024-04-22T08:57:30Z | 0 | 0 | null |
[
"safetensors",
"ja",
"arxiv:2402.10712",
"license:mit",
"region:us"
] | null | 2024-02-19T12:10:48Z |
---
license: mit
language: ja
---
TigerBot-7B Japanese [LAPT]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-lapt-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"TigerResearch/tigerbot-7b-base"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/tigerbot-7b-base-lapt-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-7b1-lapt-ja
|
atsuki-yamaguchi
| 2024-04-22T08:57:27Z | 0 | 0 | null |
[
"safetensors",
"ja",
"arxiv:2402.10712",
"license:mit",
"region:us"
] | null | 2024-02-19T12:02:24Z |
---
license: mit
language: ja
---
BLOOM-7B Japanese [LAPT]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-lapt-ja"
)
tokenizer = AutoTokenizer.from_pretrained(
"bigscience/bloom-7b1"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-lapt-ja",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|
atsuki-yamaguchi/bloom-7b1-lapt-ar
|
atsuki-yamaguchi
| 2024-04-22T08:57:25Z | 0 | 0 | null |
[
"safetensors",
"ar",
"arxiv:2402.10712",
"license:mit",
"region:us"
] | null | 2024-02-19T11:58:19Z |
---
license: mit
language: ar
---
BLOOM-7B Arabic [LAPT]
===
## How to use
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-lapt-ar"
)
tokenizer = AutoTokenizer.from_pretrained(
"bigscience/bloom-7b1"
)
# w/ GPU
model = AutoPeftModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/bloom-7b1-lapt-ar",
device_map="auto",
load_in_8bit=True,
)
```
## Citation
```
@article{yamaguchi2024empirical,
title={An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative {LLM} Inference},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
journal={ArXiv},
year={2024},
volume={abs/2402.10712},
url={https://arxiv.org/abs/2402.10712}
}
```
## Link
For more details, please visit https://github.com/gucci-j/llm-cva
|