| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
Dhahlan2000/Chitti-Large-model-for-GPT-v11
|
Dhahlan2000
| 2024-06-22T12:26:43Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Dhahlan2000/Chitti-Base-model-for-GPT-v10",
"base_model:finetune:Dhahlan2000/Chitti-Base-model-for-GPT-v10",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-22T12:26:11Z |
---
license: apache-2.0
base_model: Dhahlan2000/Chitti-Base-model-for-GPT-v10
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Chitti-Large-model-for-GPT-v11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Chitti-Large-model-for-GPT-v11
This model is a fine-tuned version of [Dhahlan2000/Chitti-Base-model-for-GPT-v10](https://huggingface.co/Dhahlan2000/Chitti-Base-model-for-GPT-v10) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8143
- Bleu: 5.3746
- Gen Len: 12.2533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
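For reference, the linear schedule decays the learning rate from 2e-05 toward zero over the run's 55,692 training steps (step count taken from the results table that follows). A minimal sketch, assuming no warmup since the card reports none:

```python
def linear_lr(step: int, total_steps: int = 55692, base_lr: float = 2e-05) -> float:
    """Linearly decayed learning rate with no warmup, a sketch of
    lr_scheduler_type=linear as reported in this card."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))      # base rate at the first step
print(linear_lr(55692))  # decayed to zero at the final step
```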
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 3.0362 | 1.0 | 9282 | 2.8427 | 4.9768 | 12.194 |
| 3.0372 | 2.0 | 18564 | 2.8305 | 5.2045 | 12.3033 |
| 3.0038 | 3.0 | 27846 | 2.8253 | 5.2307 | 12.268 |
| 2.9933 | 4.0 | 37128 | 2.8188 | 5.4433 | 12.3027 |
| 2.9947 | 5.0 | 46410 | 2.8139 | 5.3825 | 12.2993 |
| 2.9752 | 6.0 | 55692 | 2.8143 | 5.3746 | 12.2533 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
aifeifei798/llama3-8B-DarkIdol-1.1
|
aifeifei798
| 2024-06-22T12:25:40Z | 7 | 5 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"roleplay",
"llama3",
"sillytavern",
"idol",
"conversational",
"en",
"ja",
"zh",
"arxiv:2403.19522",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-20T12:31:26Z |
---
license: llama3
language:
- en
- ja
- zh
tags:
- roleplay
- llama3
- sillytavern
- idol
---
# Special Thanks:
- Lewdiculous's superb GGUF version; thank you for your conscientious and responsible dedication.
- https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.1-GGUF-IQ-Imatrix-Request
# Model Description:
The module combination has been readjusted to better fulfill various roles and has been adapted for mobile phones.
- DarkIdol: roles that you can imagine, and roles that you cannot.
- Roleplay
- Specialized in various role-playing scenarios
- For more, see the test role script (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.0/resolve/main/DarkIdol_test_openai_api_lmstudio.py?download=true)

# Change Log
### 2024-06-20
- Switched to the underlying base model (Meta-Llama-3-8B-Instruct).
- Integrated the numerous models I previously created; see base_model for the full list.
# Stop Strings
```python
stop = [
    "## Instruction:",
    "### Instruction:",
    "<|end_of_text|>",
    " //:",
    "</s>",
    "<3```",
]
```
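When a frontend does not support stop strings natively, the same list can be applied client-side. A minimal sketch (the helper name is mine, not part of this card) that truncates a completion at the earliest stop sequence:

```python
def truncate_at_stop(text: str, stops: list[str]) -> str:
    """Cut `text` at the first occurrence of any stop string."""
    cut = len(text)
    for s in stops:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

stop = ["## Instruction:", "### Instruction:", "<|end_of_text|>", " //:", "</s>", "<3```"]
print(truncate_at_stop("Sure, here you go!</s>## Instruction: ignore", stop))
# → Sure, here you go!
```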
# Model Use
- Koboldcpp https://github.com/LostRuins/koboldcpp
- Since KoboldCpp is taking a while to update with the latest llama.cpp commits, I'd recommend this [fork](https://github.com/Nexesenex/kobold.cpp) if anyone has issues.
- LM Studio https://lmstudio.ai/
- llama.cpp https://github.com/ggerganov/llama.cpp
- Backyard AI https://backyard.ai/
- Meet Layla, an AI chatbot that runs offline on your device. No internet connection required. No censorship. Complete privacy. Layla Lite: https://www.layla-network.ai/
- Layla Lite llama3-8B-DarkIdol-1.1-Q4_K_S-imat.gguf https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.1-GGUF-IQ-Imatrix-Request/blob/main/llama3-8B-DarkIdol-1.1-Q4_K_S-imat.gguf?download=true
- more gguf at https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.1-GGUF-IQ-Imatrix-Request
# Character
- https://character-tavern.com/
- https://characterhub.org/
- https://pygmalion.chat/
- https://aetherroom.club/
- https://backyard.ai/
- Layla AI chatbot
### If you want to use vision functionality:
* You must use the latest versions of [Koboldcpp](https://github.com/Nexesenex/kobold.cpp).
### To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo. [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16)
* You can load the **mmproj** by using the corresponding section in the interface:

### Thank you:
To the authors for their hard work, which has given me more options to easily create what I want. Thank you for your efforts.
- Hastagaras
- Gryphe
- cgato
- ChaoticNeutrals
- mergekit
- merge
- transformers
- llama
- Nitral-AI
- .........
---
base_model:
- cgato/L3-TheSpice-8b-v0.8.3
- aifeifei798/llama3-8B-feifei-1.0
- aifeifei798/Meta-Llama-3-8B-Instruct
- Nitral-AI/Hathor_RP-v.01-L3-8B
- aifeifei798/llama3-8B-aifeifei-1.2
- aifeifei798/llama3-8B-aifeifei-1.3
- aifeifei798/llama3-8B-DarkIdol-1.0
- aifeifei798/llama3-8B-aifeifei-1.0
- aifeifei798/llama3-8B-aifeifei-1.1
library_name: transformers
tags:
- mergekit
- merge
---
# llama3-8B-DarkIdol-1.1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [aifeifei798/Meta-Llama-3-8B-Instruct](https://huggingface.co/aifeifei798/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
* [aifeifei798/llama3-8B-feifei-1.0](https://huggingface.co/aifeifei798/llama3-8B-feifei-1.0)
* [Nitral-AI/Hathor_RP-v.01-L3-8B](https://huggingface.co/Nitral-AI/Hathor_RP-v.01-L3-8B)
* [aifeifei798/llama3-8B-aifeifei-1.2](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.2)
* [aifeifei798/llama3-8B-aifeifei-1.3](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.3)
* [aifeifei798/llama3-8B-DarkIdol-1.0](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.0)
* [aifeifei798/llama3-8B-aifeifei-1.0](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.0)
* [aifeifei798/llama3-8B-aifeifei-1.1](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cgato/L3-TheSpice-8b-v0.8.3
- model: Nitral-AI/Hathor_RP-v.01-L3-8B
- model: aifeifei798/llama3-8B-feifei-1.0
- model: aifeifei798/llama3-8B-aifeifei-1.0
- model: aifeifei798/llama3-8B-aifeifei-1.1
- model: aifeifei798/llama3-8B-aifeifei-1.2
- model: aifeifei798/llama3-8B-aifeifei-1.3
- model: aifeifei798/llama3-8B-DarkIdol-1.0
merge_method: model_stock
base_model: aifeifei798/Meta-Llama-3-8B-Instruct
dtype: bfloat16
```
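Model Stock, roughly, moves the merged weights toward the base model using an interpolation ratio derived from the geometry of the fine-tuned checkpoints. As a toy illustration only (plain per-parameter averaging, a simplification and not the exact Model Stock algorithm):

```python
def average_weights(models: list[dict[str, float]]) -> dict[str, float]:
    """Element-wise average of per-parameter weights, a toy stand-in for merging."""
    keys = models[0].keys()
    return {k: sum(m[k] for m in models) / len(models) for k in keys}

merged = average_weights([{"w": 1.0}, {"w": 3.0}])
print(merged)  # {'w': 2.0}
```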
|
damgomz/ft_16_1e6_base_x2
|
damgomz
| 2024-06-22T12:24:17Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-21T15:25:03Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 76306.07603764534 |
| Emissions (Co2eq in kg) | 0.0461739677171568 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9008335453381126 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0794846584613123 |
| Consumed energy (kWh) | 0.9803182037994268 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
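The energy figures above are consistent with power × duration. A minimal sanity-check sketch (CodeCarbon integrates over time, so the values match only approximately):

```python
def energy_kwh(power_w: float, seconds: float) -> float:
    """Energy in kWh from a constant power draw over a duration (W*s = J; 3.6e6 J per kWh)."""
    return power_w * seconds / 3.6e6

duration = 76306.07603764534  # seconds, from the table above
print(energy_kwh(42.5, duration))  # ≈ 0.9008 kWh (CPU)
print(energy_kwh(3.75, duration))  # ≈ 0.0795 kWh (RAM)
```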
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.14688919637246728 |
| Emissions (Co2eq in kg) | 0.029886546448077755 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_1e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.721941 | 0.513745 |
| 1 | 0.466689 | 0.346496 | 0.871139 |
| 2 | 0.289983 | 0.264863 | 0.901954 |
| 3 | 0.229790 | 0.238954 | 0.913884 |
| 4 | 0.198653 | 0.229885 | 0.917292 |
| 5 | 0.177236 | 0.230246 | 0.905920 |
| 6 | 0.155984 | 0.222021 | 0.914660 |
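The F-beta score in the table weights precision against recall; the card does not state the beta used, so the sketch below keeps it as a parameter (beta=1 reduces to F1):

```python
def fbeta(precision: float, recall: float, beta: float = 1.0) -> float:
    """F-beta score: weighted harmonic mean of precision and recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(fbeta(0.9, 0.9))  # 0.9 — equals F1 when precision == recall
```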
|
damgomz/ft_16_1e6_base_x1
|
damgomz
| 2024-06-22T12:18:02Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:58:09Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 75931.89353346825 |
| Emissions (Co2eq in kg) | 0.0459475461838938 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.8964161302200631 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0790949244049688 |
| Consumed energy (kWh) | 0.9755110546250364 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.14616889505192637 |
| Emissions (Co2eq in kg) | 0.029739991633941726 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_1e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.723550 | 0.695135 |
| 1 | 0.502504 | 0.387507 | 0.885544 |
| 2 | 0.326378 | 0.289105 | 0.899591 |
| 3 | 0.243015 | 0.242274 | 0.912829 |
| 4 | 0.195490 | 0.229954 | 0.917272 |
| 5 | 0.162022 | 0.223088 | 0.913880 |
| 6 | 0.128879 | 0.232340 | 0.903057 |
|
SoumilB7/School_categoriser
|
SoumilB7
| 2024-06-22T12:10:48Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-06-22T11:37:29Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
|
pemix09/satellite-images-segmentation
|
pemix09
| 2024-06-22T12:09:21Z | 1 | 1 | null |
[
"tensorboard",
"license:mit",
"region:us"
] | null | 2024-06-08T13:34:18Z |
---
license: mit
---
The model is trained to recognise 6 classes of objects: road, car, water, forest, grass and building.
Images from any source can be provided, e.g. aerial images, UAV images, etc. YOLOv8 was used, as it provides a CNN implementation
that helps recognize classes regardless of image zoom and offers high proficiency. Research was conducted at Adam Mickiewicz University in Poznań, Poland,
by:
- Przemysław Klejno
- Karol Hadzicki
- Marcin Jakubik
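A usage sketch under stated assumptions: the `ultralytics` YOLOv8 API, the weights filename, and network/GPU availability are all assumptions on my part, not part of this card.

```python
CLASSES = ["road", "car", "water", "forest", "grass", "building"]

def segment(image_path: str, weights: str = "satellite-seg.pt"):
    """Run YOLOv8 segmentation on an aerial/UAV image (weights filename is hypothetical)."""
    from ultralytics import YOLO  # deferred import: requires `pip install ultralytics`
    model = YOLO(weights)
    return model(image_path)

# Example (not executed here):
# results = segment("aerial_scene.jpg")
```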
|
m3rg-iitd/matscibert
|
m3rg-iitd
| 2024-06-22T12:00:54Z | 3,374 | 17 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: mit
language:
- en
metrics:
- accuracy
- f1
- precision
- recall
library_name: transformers
---
# MatSciBERT
## A Materials Domain Language Model for Text Mining and Information Extraction
This is the pretrained model presented in [MatSciBERT: A materials domain language model for text mining and information extraction](https://rdcu.be/cMAp5), which is a BERT model trained on material science research papers.
The training corpus comprises papers related to the broad category of materials: alloys, glasses, metallic glasses, cement, and concrete. We have utilised the abstracts and full text of papers (when available). All the research papers were downloaded from [ScienceDirect](https://www.sciencedirect.com/) using the [Elsevier API](https://dev.elsevier.com/). The detailed methodology is given in the paper.
The code for pretraining and fine-tuning on downstream tasks is shared on [GitHub](https://github.com/m3rg-repo/MatSciBERT).
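As a quick-start sketch (not from the paper; it assumes the `transformers` library is installed and that network access to the Hub is available for the weight download), the model can be loaded for masked-token prediction:

```python
MODEL_ID = "m3rg-iitd/matscibert"

def predict_masked(sentence: str):
    """Fill in a [MASK] token with MatSciBERT (downloads weights on first call)."""
    from transformers import pipeline  # deferred import so the sketch loads without transformers
    fill_mask = pipeline("fill-mask", model=MODEL_ID)
    return fill_mask(sentence)

# Example query (not executed here):
# predict_masked("The glass transition [MASK] of the alloy was measured.")
```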
If you find this useful in your research, please consider citing:
```
@article{gupta_matscibert_2022,
title = "{MatSciBERT}: A Materials Domain Language Model for Text Mining and Information Extraction",
author = "Gupta, Tanishq and
Zaki, Mohd and
Krishnan, N. M. Anoop and
Mausam",
year = "2022",
month = may,
journal = "npj Computational Materials",
volume = "8",
number = "1",
pages = "102",
issn = "2057-3960",
url = "https://www.nature.com/articles/s41524-022-00784-w",
doi = "10.1038/s41524-022-00784-w"
}
```
|
marsggbo/t2-small-token-pattern-predictor-switch32-xsum
|
marsggbo
| 2024-06-22T11:59:58Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-22T11:59:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
welsachy/roberta-base-finetuned-depression
|
welsachy
| 2024-06-22T11:58:28Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-22T11:01:43Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-finetuned-depression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-depression
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7662
- Precision: 0.8912
- Recall: 0.9136
- F1: 0.9018
- Accuracy: 0.9104
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 469 | 0.5219 | 0.8220 | 0.7921 | 0.8000 | 0.8603 |
| 0.602 | 2.0 | 938 | 0.6344 | 0.9039 | 0.8257 | 0.8538 | 0.8753 |
| 0.3573 | 3.0 | 1407 | 0.4821 | 0.8818 | 0.8902 | 0.8859 | 0.8870 |
| 0.2511 | 4.0 | 1876 | 0.6265 | 0.8511 | 0.8965 | 0.8676 | 0.8934 |
| 0.1614 | 5.0 | 2345 | 0.5439 | 0.8908 | 0.8992 | 0.8919 | 0.9041 |
| 0.1107 | 6.0 | 2814 | 0.6237 | 0.8838 | 0.8990 | 0.8886 | 0.9009 |
| 0.0756 | 7.0 | 3283 | 0.6915 | 0.8930 | 0.9062 | 0.8988 | 0.9083 |
| 0.057 | 8.0 | 3752 | 0.6572 | 0.8736 | 0.9107 | 0.8905 | 0.9062 |
| 0.0664 | 9.0 | 4221 | 0.8022 | 0.8692 | 0.8987 | 0.8804 | 0.8977 |
| 0.0392 | 10.0 | 4690 | 0.7953 | 0.8931 | 0.8847 | 0.8844 | 0.8977 |
| 0.0472 | 11.0 | 5159 | 0.7757 | 0.8951 | 0.8886 | 0.8885 | 0.8998 |
| 0.0375 | 12.0 | 5628 | 0.7821 | 0.8881 | 0.9029 | 0.8939 | 0.9072 |
| 0.0292 | 13.0 | 6097 | 0.8124 | 0.8793 | 0.8982 | 0.8870 | 0.9009 |
| 0.0373 | 14.0 | 6566 | 0.9106 | 0.8774 | 0.8818 | 0.8735 | 0.8934 |
| 0.0227 | 15.0 | 7035 | 0.8325 | 0.8876 | 0.8855 | 0.8825 | 0.8966 |
| 0.0249 | 16.0 | 7504 | 0.7662 | 0.8912 | 0.9136 | 0.9018 | 0.9104 |
| 0.0249 | 17.0 | 7973 | 0.8383 | 0.8804 | 0.8905 | 0.8833 | 0.8955 |
| 0.0245 | 18.0 | 8442 | 0.8073 | 0.8844 | 0.9000 | 0.8907 | 0.9030 |
| 0.0188 | 19.0 | 8911 | 0.8137 | 0.8850 | 0.9012 | 0.8917 | 0.9041 |
| 0.0203 | 20.0 | 9380 | 0.8234 | 0.8850 | 0.8993 | 0.8905 | 0.9030 |
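The headline metrics above correspond to the checkpoint with the best validation F1 (epoch 16 in the table). A minimal sketch of that selection:

```python
# (epoch, validation F1) pairs copied from the training results table
f1_by_epoch = [
    (1, 0.8000), (2, 0.8538), (3, 0.8859), (4, 0.8676), (5, 0.8919),
    (6, 0.8886), (7, 0.8988), (8, 0.8905), (9, 0.8804), (10, 0.8844),
    (11, 0.8885), (12, 0.8939), (13, 0.8870), (14, 0.8735), (15, 0.8825),
    (16, 0.9018), (17, 0.8833), (18, 0.8907), (19, 0.8917), (20, 0.8905),
]

best_epoch, best_f1 = max(f1_by_epoch, key=lambda pair: pair[1])
print(best_epoch, best_f1)  # 16 0.9018
```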
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
John6666/t-ponynai3-v51wo-sdxl-spo
|
John6666
| 2024-06-22T11:55:25Z | 2,500 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"SPO",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-22T11:49:45Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
- SPO
---
Original model is [here](https://civitai.com/models/317902/t-ponynai3).
|
Echelon-AI/marathi-mistral-v0.2
|
Echelon-AI
| 2024-06-22T11:49:05Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mr",
"dataset:wikimedia/wikipedia",
"dataset:smallstepai/marathi-instruction-tuning-alpaca",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T11:18:25Z |
---
license: apache-2.0
datasets:
- wikimedia/wikipedia
- smallstepai/marathi-instruction-tuning-alpaca
language:
- mr
---
<!-- Provide a quick summary of what the model is/does. -->
Introducing Mistral Marathi, an advanced language model excelling in the Marathi language.
### Model Description
Introducing Mistral Marathi, an advanced language model designed to excel in the Marathi language. This model has been meticulously fine-tuned on Marathi Wikipedia and Alpaca Marathi Instruct, ensuring it delivers exceptional performance in understanding and generating Marathi text.
Mistral Marathi leverages the vast repository of knowledge found on Marathi Wikipedia, encompassing a wide range of topics and linguistic nuances. This extensive dataset enables the model to provide accurate and contextually relevant responses, making it a valuable tool for various applications, from academic research to everyday communication.
Additionally, the integration of Alpaca Marathi Instruct further enhances the model's capabilities. Alpaca Marathi Instruct focuses on providing structured and instructional content, allowing Mistral Marathi to excel in educational settings and complex instruction-based interactions. This combination ensures that the model is not only fluent in Marathi but also adept at conveying information in a clear and precise manner.
Whether you're looking to engage in conversational dialogues, seek assistance with Marathi language tasks, or require detailed and structured information, Mistral Marathi stands out as a cutting-edge solution, pushing the boundaries of what a language model can achieve in the Marathi linguistic landscape.
- **Developed by:** Echelon AI
- **Finetuned from model:** Mistral 7B
## Uses
1. **Academic Research Assistance**:
Mistral Marathi can be utilized by researchers and students to gather detailed and accurate information on various topics in Marathi. By leveraging its extensive training on Marathi Wikipedia, the model can provide summaries, explanations, and insights that are critical for academic purposes.
2. **Educational Content Creation**:
Educators can use Mistral Marathi to generate instructional materials and resources in Marathi. The integration of Alpaca Marathi Instruct allows the model to create structured lessons, exercises, and explanations, making it an excellent tool for developing educational content tailored to Marathi-speaking learners.
3. **Conversational AI and Customer Support**:
Businesses and organizations can implement Mistral Marathi in their customer support systems to provide efficient and accurate responses in Marathi. The model's ability to understand and generate natural language makes it ideal for handling customer inquiries, providing information, and resolving issues in a conversational manner.
|
welsachy/distilbert-base-uncased-finetuned-depression
|
welsachy
| 2024-06-22T11:47:59Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-05T18:12:58Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-depression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-depression
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6721
- Precision: 0.9018
- Recall: 0.8881
- F1: 0.8946
- Accuracy: 0.9168
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 469 | 0.3914 | 0.9136 | 0.7542 | 0.8087 | 0.8678 |
| 0.5449 | 2.0 | 938 | 0.3944 | 0.8652 | 0.8677 | 0.8644 | 0.8977 |
| 0.2679 | 3.0 | 1407 | 0.4355 | 0.8717 | 0.8713 | 0.8703 | 0.9009 |
| 0.1516 | 4.0 | 1876 | 0.4509 | 0.8757 | 0.8809 | 0.8779 | 0.9083 |
| 0.0989 | 5.0 | 2345 | 0.4762 | 0.8861 | 0.8846 | 0.8854 | 0.9094 |
| 0.0666 | 6.0 | 2814 | 0.4829 | 0.8878 | 0.8890 | 0.8883 | 0.9126 |
| 0.0563 | 7.0 | 3283 | 0.5768 | 0.8918 | 0.8866 | 0.8885 | 0.9115 |
| 0.0349 | 8.0 | 3752 | 0.6874 | 0.8898 | 0.8644 | 0.8758 | 0.8987 |
| 0.0444 | 9.0 | 4221 | 0.6256 | 0.8804 | 0.8822 | 0.8790 | 0.9019 |
| 0.0301 | 10.0 | 4690 | 0.6354 | 0.8897 | 0.8750 | 0.8814 | 0.9030 |
| 0.0318 | 11.0 | 5159 | 0.7172 | 0.8894 | 0.8682 | 0.8770 | 0.9009 |
| 0.0222 | 12.0 | 5628 | 0.6906 | 0.9001 | 0.8700 | 0.8834 | 0.9019 |
| 0.0243 | 13.0 | 6097 | 0.7263 | 0.8898 | 0.8732 | 0.8800 | 0.9019 |
| 0.0172 | 14.0 | 6566 | 0.6936 | 0.8945 | 0.8766 | 0.8846 | 0.9072 |
| 0.0204 | 15.0 | 7035 | 0.7428 | 0.9081 | 0.8730 | 0.8889 | 0.9051 |
| 0.0162 | 16.0 | 7504 | 0.7202 | 0.8966 | 0.8748 | 0.8846 | 0.9062 |
| 0.0162 | 17.0 | 7973 | 0.6721 | 0.9018 | 0.8881 | 0.8946 | 0.9168 |
| 0.0172 | 18.0 | 8442 | 0.7664 | 0.9037 | 0.8706 | 0.8854 | 0.9030 |
| 0.0156 | 19.0 | 8911 | 0.7166 | 0.8985 | 0.8784 | 0.8876 | 0.9094 |
| 0.0158 | 20.0 | 9380 | 0.7327 | 0.8966 | 0.8748 | 0.8846 | 0.9062 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mjm4dl/llama3_8B_slotfilling_intent_Prompt2_SlotName2_r8_dataold_plus_16june_igonre_input_20240622
|
mjm4dl
| 2024-06-22T11:47:05Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T11:44:13Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SiMajid/opt-350-value
|
SiMajid
| 2024-06-22T11:46:37Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-classification",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-22T11:45:55Z |
---
license: other
base_model: facebook/opt-350m
tags:
- trl
- reward-trainer
- generated_from_trainer
model-index:
- name: opt-350-value
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350-value
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
DFofanov78/llama-3-8b-Instruct-bnb-4bit
|
DFofanov78
| 2024-06-22T11:45:55Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"unsloth",
"llama-3",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2024-06-22T02:22:07Z |
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- llama
- llama-3
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
A 4-bit model quantized directly with `bitsandbytes`.
We have a Google Colab Tesla T4 notebook for Llama-3 8b here: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF, served with vLLM, or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less |
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
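Since this is a 4-bit build of the Llama-3 8b *Instruct* model, prompts should follow the Llama-3 chat template. Below is a minimal sketch of that format; the special tokens are assumptions based on the base model's documented template, not taken from this card (in practice, `tokenizer.apply_chat_template` handles this for you):

```python
# Hypothetical sketch of the Llama-3 Instruct single-turn prompt format.
# The special tokens below are assumptions from the base model's template.
def format_llama3_chat(user_message: str, system_message: str = "") -> str:
    """Build a single-turn Llama-3 Instruct prompt string."""
    parts = ["<|begin_of_text|>"]
    if system_message:
        parts.append(
            f"<|start_header_id|>system<|end_header_id|>\n\n{system_message}<|eot_id|>"
        )
    parts.append(
        f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
    )
    # The trailing assistant header cues the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_chat("Hello!")
```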
|
ZidanAfdhalulIhsaan/Zidan_model_output_new
|
ZidanAfdhalulIhsaan
| 2024-06-22T11:38:27Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-22T10:20:19Z |
---
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Zidan_model_output_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Zidan_model_output_new
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7244
- Accuracy: 0.8545
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 124 | 0.8299 | 0.6364 |
| No log | 2.0 | 248 | 0.4376 | 0.8182 |
| No log | 3.0 | 372 | 0.4606 | 0.8364 |
| No log | 4.0 | 496 | 0.4147 | 0.8182 |
| 0.6148 | 5.0 | 620 | 0.3365 | 0.8727 |
| 0.6148 | 6.0 | 744 | 0.3996 | 0.8545 |
| 0.6148 | 7.0 | 868 | 0.5302 | 0.8364 |
| 0.6148 | 8.0 | 992 | 0.5224 | 0.8545 |
| 0.1989 | 9.0 | 1116 | 0.5880 | 0.8727 |
| 0.1989 | 10.0 | 1240 | 0.6525 | 0.8545 |
| 0.1989 | 11.0 | 1364 | 0.5338 | 0.8909 |
| 0.1989 | 12.0 | 1488 | 0.7244 | 0.8545 |
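Note that the final checkpoint reported above (accuracy 0.8545) is not the best by validation accuracy; a minimal sketch of reading the table:

```python
# Validation accuracy per epoch, transcribed from the training results table.
val_accuracy = {
    1: 0.6364, 2: 0.8182, 3: 0.8364, 4: 0.8182, 5: 0.8727, 6: 0.8545,
    7: 0.8364, 8: 0.8545, 9: 0.8727, 10: 0.8545, 11: 0.8909, 12: 0.8545,
}
# Epoch 11 is the best checkpoint by validation accuracy, not the final one.
best_epoch = max(val_accuracy, key=val_accuracy.get)
```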
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
youssef227/llama-3-8b-Instruct-bnb-telcom-3
|
youssef227
| 2024-06-22T11:35:44Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-06-21T23:02:20Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** youssef227
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sjunique/results_split_2
|
sjunique
| 2024-06-22T11:31:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T09:24:27Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: results_split_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_split_2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
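The `total_train_batch_size` above follows directly from the per-device batch size and the gradient accumulation steps (single-device training assumed):

```python
# Effective (total) train batch size = per-device batch size
# * gradient accumulation steps (single device assumed).
train_batch_size = 32
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
```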
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
|
John6666/anima-pencil-xl-v4-sdxl-spo
|
John6666
| 2024-06-22T11:28:21Z | 1,605 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"SPO",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-22T11:22:53Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- SPO
---
Original model is [here](https://huggingface.co/bluepen5805/anima_pencil-XL).
|
Xuezha/RecombinationTransformer-small-base
|
Xuezha
| 2024-06-22T11:12:58Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"RecombinationTransformer",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-06-22T11:06:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hchcsuim/batch-size16_Celeb-DF-v2_opencv-1FPS_faces-expand20-aligned_unaugmentation
|
hchcsuim
| 2024-06-22T11:09:32Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-22T10:06:18Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_Celeb-DF-v2_opencv-1FPS_faces-expand20-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9920730570913835
- name: Precision
type: precision
value: 0.9940958095316375
- name: Recall
type: recall
value: 0.9971556166265888
- name: F1
type: f1
value: 0.9956233621907885
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_Celeb-DF-v2_opencv-1FPS_faces-expand20-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0227
- Accuracy: 0.9921
- Precision: 0.9941
- Recall: 0.9972
- F1: 0.9956
- Roc Auc: 0.9986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.0496 | 0.9994 | 1257 | 0.0227 | 0.9921 | 0.9941 | 0.9972 | 0.9956 | 0.9986 |
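The reported F1 is the harmonic mean of the reported precision and recall; a quick sanity check reproduces it:

```python
# F1 is the harmonic mean of precision and recall. Recomputing it from the
# values reported above reproduces the reported F1 to 4 decimal places.
precision = 0.9941
recall = 0.9972
f1 = 2 * precision * recall / (precision + recall)
```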
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
limaatulya/my_awesome_billsum_model_72
|
limaatulya
| 2024-06-22T11:08:29Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-22T11:06:00Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
model-index:
- name: my_awesome_billsum_model_72
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model_72
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4308
- eval_rouge1: 0.4384
- eval_rouge2: 0.3029
- eval_rougeL: 0.4176
- eval_rougeLsum: 0.4167
- eval_gen_len: 15.8125
- eval_runtime: 8.9074
- eval_samples_per_second: 5.389
- eval_steps_per_second: 0.337
- epoch: 2.0
- step: 24
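The throughput figures above are mutually consistent if the eval split holds roughly 48 samples (an inferred count, not stated in the card):

```python
import math

# Inferred: 5.389 samples/s * 8.9074 s ≈ 48 eval samples (not stated in the card).
eval_runtime = 8.9074
num_samples = 48
batch_size = 16  # eval_batch_size from the hyperparameters

# Reproduce the reported samples/sec and steps/sec.
samples_per_second = round(num_samples / eval_runtime, 3)
steps_per_second = round(math.ceil(num_samples / batch_size) / eval_runtime, 3)
```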
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Yash-Shindey/ppo-Huggy
|
Yash-Shindey
| 2024-06-22T11:01:18Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-06-22T11:01:13Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Yash-Shindey/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
John6666/swam-pony-xl-v1-sdxl
|
John6666
| 2024-06-22T11:00:43Z | 2,387 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"pony",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-22T10:55:30Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- pony
---
Original model is [here](https://civitai.com/models/416417/swamponyxl).
|
NotoriousH2/Qwen1.5b-ref
|
NotoriousH2
| 2024-06-22T10:57:34Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T10:54:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/pony-diffusion-v6-xl-sdxl-spo
|
John6666
| 2024-06-22T10:49:23Z | 2,748 | 5 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"SPO",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-22T06:45:42Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
- SPO
---
The original model is [here](https://civitai.com/models/257749/pony-diffusion-v6-xl).
|
kennyTheo/Test_bert-finetuned-ner
|
kennyTheo
| 2024-06-22T10:39:08Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-22T10:10:39Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Test_bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9340042897211681
- name: Recall
type: recall
value: 0.9527095254123191
- name: F1
type: f1
value: 0.9432641839540115
- name: Accuracy
type: accuracy
value: 0.9868134455760287
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Test_bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0581
- Precision: 0.9340
- Recall: 0.9527
- F1: 0.9433
- Accuracy: 0.9868
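As a quick sanity check, the reported F1 is the harmonic mean of the precision and recall listed above, which can be verified directly:

```python
# F1 is the harmonic mean of precision and recall.
precision = 0.9340042897211681
recall = 0.9527095254123191

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9433
```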
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0785 | 1.0 | 1756 | 0.0631 | 0.9074 | 0.9355 | 0.9213 | 0.9827 |
| 0.0373 | 2.0 | 3512 | 0.0637 | 0.9304 | 0.9475 | 0.9389 | 0.9857 |
| 0.0223 | 3.0 | 5268 | 0.0581 | 0.9340 | 0.9527 | 0.9433 | 0.9868 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
fbaldassarri/modello-italia-9b-autoround-w4g128-gpu
|
fbaldassarri
| 2024-06-22T10:26:21Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"conversational",
"it",
"arxiv:2309.05516",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-06-22T09:50:00Z |
---
license: mit
language:
- it
---
# Model Card for Modello Italia 9B INT4 group-size 128 GPU-optimized
This is an UNOFFICIAL conversion/quantization of the OFFICIAL checkpoint of *"Modello Italia 9B"*, a Large Language Model (LLM) developed by [iGenius](https://it.igenius.ai/) in collaboration with [CINECA](https://www.cineca.it/).
* More information about Modello Italia: [click here](https://it.igenius.ai/language-models).
This model has been quantized to INT4 with group size 128 and optimized for inference on GPU.
## 🚨 Disclaimers
* This is an UNOFFICIAL quantization of the OFFICIAL model checkpoint released by iGenius.
* This model also builds on the conversion for HF Transformers made by [Sapienza NLP, Sapienza University of Rome](https://huggingface.co/sapienzanlp).
* The original model was developed using LitGPT; its weights therefore need to be converted before they can be used with Hugging Face Transformers.
## 🚨 Terms and Conditions
* **Note:** By using this model, you accept iGenius' [**terms and conditions**](https://secure.igenius.ai/legal/italia_terms_and_conditions.pdf).
## 🚨 Reproducibility
This model has been quantized using Intel [auto-round](https://github.com/intel/auto-round), based on the [SignRound technique](https://arxiv.org/pdf/2309.05516v4).
```bash
git clone https://github.com/fbaldassarri/model-conversion.git
cd model-conversion
mkdir models
cd models
huggingface-cli download --resume-download --local-dir sapienzanlp_modello-italia-9b --local-dir-use-symlinks False sapienzanlp/modello-italia-9b
```
Then,
```bash
python3 ./examples/language-modeling/main.py \
--model_name ./models/sapienzanlp_modello-italia-9b \
--device 0 \
--group_size 128 \
--bits 4 \
--iters 1000 \
--deployment_device 'gpu' \
--output_dir "./models/sapienzanlp_modello-italia-9b-int4" \
--train_bs 2 \
--gradient_accumulate_steps 8
```
## 🚨 Biases and Risks
From the terms and conditions of iGenius for Modello Italia (translated from Italian):
> Modello Italia is designed to be used by everyone and to adapt to a wide range of use cases. It was built with the goal of being accessible to people from diverse backgrounds, experiences, and perspectives. Modello Italia serves users and their needs without imposing unnecessary judgments or norms, while recognizing that content which is potentially problematic in some contexts can serve valid purposes in others. Respect for the dignity and autonomy of all users, especially in terms of freedom of thought and expression, is a fundamental pillar of its design. However, as a new technology, Modello Italia carries risks related to its use. Testing conducted so far has been performed in Italian and could not cover every possible situation. Therefore, as with all LLMs, Modello Italia's outputs cannot be predicted in advance, and the model may in some cases produce inaccurate, biased, or otherwise objectionable responses. Before deploying Modello Italia in any context, developers are strongly encouraged to perform safety and adaptation testing specific to their applications.
We are aware of the biases and potential problematic/toxic content that current pretrained large language models exhibit: more specifically, as probabilistic models of (Italian and English) languages, they reflect and amplify the biases of their training data.
For more information about this issue, please refer to our survey paper:
* [Biases in Large Language Models: Origins, Inventory, and Discussion](https://dl.acm.org/doi/full/10.1145/3597307)
## Model architecture
* The model architecture is **based on GPT-NeoX**.
## Results
**Modello Italia 9B INT4 group-size 128 GPU-optimized** has not been evaluated on standard benchmarks yet.
If you would like to contribute with your evaluation, please feel free to submit a pull request.
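For intuition, group-wise 4-bit quantization splits the weights into groups of 128 values that share one scale. The sketch below is a simplified, symmetric round-to-nearest illustration on a toy matrix, not the actual auto-round/SignRound procedure (which additionally learns the rounding via signed gradient descent):

```python
import numpy as np

def quantize_groupwise(w, group_size=128, bits=4):
    """Symmetric round-to-nearest quantization with one scale per group."""
    qmax = 2 ** (bits - 1) - 1                       # 7 for INT4
    groups = w.reshape(-1, group_size)
    scales = np.abs(groups).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(groups / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize(q, scales, shape):
    return (q * scales).reshape(shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 512)).astype(np.float32)     # toy weight matrix
q, scales = quantize_groupwise(w)
w_hat = dequantize(q, scales, w.shape)
print(np.abs(w - w_hat).max() < np.abs(scales).max())  # True: error bounded by half a scale step
```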
|
tdolega/rag-tge_pl_Llama-3-8B
|
tdolega
| 2024-06-22T10:21:27Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"conversational",
"pl",
"dataset:tdolega/rag-tge_finetuning-dataset_pl",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-24T06:15:38Z |
---
library_name: transformers
datasets:
- tdolega/rag-tge_finetuning-dataset_pl
language:
- pl
license: llama3
---
[Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) finetuned on [rag-tge_finetuning-dataset_pl](https://huggingface.co/datasets/tdolega/rag-tge_finetuning-dataset_pl) for the [rag-tge](https://github.com/tdolega/rag-tge) project. Available as safetensors (BF16) and GGUF (Q8).
|
Fnasrin/medical-chatbot
|
Fnasrin
| 2024-06-22T10:18:40Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T10:02:57Z |
---
license: apache-2.0
---
|
Michael-s-8/mental_chatbot
|
Michael-s-8
| 2024-06-22T10:18:37Z | 5 | 0 | null |
[
"safetensors",
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-22T09:49:37Z |
---
license: apache-2.0
---
|
inrealm/bge-base-all-nli-triplet
|
inrealm
| 2024-06-22T10:15:28Z | 15 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1000",
"loss:MultipleNegativesRankingLoss",
"dataset_size:3000",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-small-en",
"base_model:finetune:BAAI/bge-small-en",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-22T07:24:24Z |
---
base_model: BAAI/bge-small-en
datasets:
- sentence-transformers/all-nli
language:
- en
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1000
- loss:MultipleNegativesRankingLoss
- dataset_size:3000
widget:
- source_sentence: A man is jumping unto his filthy bed.
sentences:
- A young male is looking at a newspaper while 2 females walks past him.
- The bed is dirty.
- The man is on the moon.
- source_sentence: A carefully balanced male stands on one foot near a clean ocean
beach area.
sentences:
- A man is ouside near the beach.
- Three policemen patrol the streets on bikes
- A man is sitting on his couch.
- source_sentence: The man is wearing a blue shirt.
sentences:
- Near the trashcan the man stood and smoked
- A man in a blue shirt leans on a wall beside a road with a blue van and red car
with water in the background.
- A man in a black shirt is playing a guitar.
- source_sentence: The girls are outdoors.
sentences:
- Two girls riding on an amusement part ride.
- a guy laughs while doing laundry
- Three girls are standing together in a room, one is listening, one is writing
on a wall and the third is talking to them.
- source_sentence: A construction worker peeking out of a manhole while his coworker
sits on the sidewalk smiling.
sentences:
- A worker is looking out of a manhole.
- A man is giving a presentation.
- The workers are both inside the manhole.
---
# SentenceTransformer based on BAAI/bge-small-en
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) on the [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) <!-- at revision 2275a7bdee235e9b4f01fa73aa60d3311983cfea -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
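In this configuration only the `[CLS]` token embedding is pooled (`pooling_mode_cls_token: True`) and then L2-normalized, so cosine similarity reduces to a dot product. A minimal sketch of those two stages, with a random tensor standing in for the `BertModel` output:

```python
import torch

torch.manual_seed(0)
token_embeddings = torch.randn(3, 12, 384)     # (batch, seq_len, hidden) stand-in for BertModel output

cls = token_embeddings[:, 0]                   # CLS-token pooling: keep position 0 only
normalized = torch.nn.functional.normalize(cls, p=2, dim=1)  # the Normalize() module

print(normalized.shape)                        # torch.Size([3, 384])
print(torch.allclose(normalized.norm(dim=1), torch.ones(3)))  # True: unit-length embeddings
```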
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("inrealm/bge-base-all-nli-triplet")
# Run inference
sentences = [
'A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.',
'A worker is looking out of a manhole.',
'The workers are both inside the manhole.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### sentence-transformers/all-nli
* Dataset: [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 3,000 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.46 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.81 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### sentence-transformers/all-nli
* Dataset: [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.95 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.78 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.35 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
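MultipleNegativesRankingLoss treats, for each anchor, its paired positive as the correct "class" among all positives and hard negatives in the batch: cosine similarities are scaled by 20 and passed through cross-entropy. A self-contained sketch of that objective, with random unit vectors in place of real embeddings:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch = 8
anchors = F.normalize(torch.randn(batch, 384), dim=1)
positives = F.normalize(torch.randn(batch, 384), dim=1)
negatives = F.normalize(torch.randn(batch, 384), dim=1)

# Candidates for every anchor: all positives plus all hard negatives in the batch.
candidates = torch.cat([positives, negatives], dim=0)   # (2*batch, 384)
scores = 20.0 * anchors @ candidates.T                  # scale=20; dot product == cos_sim on unit vectors
labels = torch.arange(batch)                            # anchor i matches candidate i
loss = F.cross_entropy(scores, labels)
print(loss.item() > 0)  # True
```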
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss |
|:------:|:----:|:-------------:|:------:|
| 0.5319 | 100 | 0.7969 | 0.4318 |
| 1.0638 | 200 | 0.2888 | 0.4764 |
| 1.5957 | 300 | 0.025 | 0.5072 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
aifeifei798/Meta-Llama-3-70B-Instruct
|
aifeifei798
| 2024-06-22T10:15:15Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-21T13:27:04Z |
---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE
extra_gated_prompt: >-
### META LLAMA 3 COMMUNITY LICENSE AGREEMENT
Meta Llama 3 Version Release Date: April 18, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3
distributed by Meta at https://llama.meta.com/get-started/.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Meta Llama 3" means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any
portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta
Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you
use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is
distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model
name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is
licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to
improve any other large language model (excluding Meta Llama 3 or derivative works thereof).
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.
### Meta Llama 3 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you
access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)
#### Prohibited Uses
We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow
others to use, Meta Llama 3 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic in Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Meta Llama 3 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
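Grouped-Query Attention reduces the key/value cache by letting several query heads share a single key/value head. The NumPy sketch below is a toy illustration of that idea only; it is not Llama 3's actual implementation, and all names and shapes are illustrative.

```python
import numpy as np

def grouped_query_attention(q, k, v):
    """Toy GQA: n_q_heads query heads share n_kv_heads KV heads
    (n_q_heads must be a multiple of n_kv_heads)."""
    n_q_heads, seq_len, d = q.shape
    n_kv_heads = k.shape[0]
    group = n_q_heads // n_kv_heads
    # Each cached KV head serves a whole group of query heads.
    k = np.repeat(k, group, axis=0)
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 4, 16))   # 8 query heads
k = rng.standard_normal((2, 4, 16))   # only 2 KV heads need caching
v = rng.standard_normal((2, 4, 16))
out = grouped_query_attention(q, k, v)
print(out.shape)  # (8, 4, 16)
```

The KV cache here holds 2 heads instead of 8, which is the inference-scalability benefit the table's GQA column refers to.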
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-70B-Instruct, for use with transformers and with the original `llama3` codebase.
### Download
```python
from modelscope import snapshot_download

# Fetch the model weights from ModelScope
snapshot_download("aifeifei798/Meta-Llama-3-70B-Instruct")
```
### Use with transformers
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-70B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
### Use with `llama3`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
   <td><strong>Carbon Emitted (tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
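As a sanity check, the table's arithmetic can be reproduced with a short script. The carbon intensity used below is inferred from the reported 8B figures (390 tCO2eq over 1.3M GPU-hours at 700W), not an officially published number:

```python
def emissions_tco2eq(gpu_hours, watts_per_gpu, kg_co2_per_kwh):
    """Estimate emissions: GPU-hours x per-GPU power -> kWh -> tonnes CO2eq."""
    kwh = gpu_hours * watts_per_gpu / 1000
    return kwh * kg_co2_per_kwh / 1000  # kg -> tonnes

# Intensity implied by the 8B row: 390 t over 1.3M GPU-hours at 700 W
intensity = 390 * 1000 / (1.3e6 * 0.7)  # ~0.43 kg CO2eq per kWh

# Applying the same intensity to the 70B run gives ~1920 t,
# close to the 1900 t reported after rounding.
print(round(emissions_tco2eq(6.4e6, 700, intensity)))
```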
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
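The shot counts in the tables below refer to how many worked examples are prepended to each test question. The helper below is a generic sketch of that few-shot setup with made-up names and toy data, not Meta's internal evaluations library:

```python
def few_shot_prompt(examples, question, k=5):
    """Build a k-shot prompt from (question, answer) pairs.
    Generic illustration of few-shot prompting; formats vary per benchmark."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples[:k])
    return f"{shots}\n\nQ: {question}\nA:"

demo = [("2+2?", "4"), ("Capital of France?", "Paris")]
prompt = few_shot_prompt(demo, "3+3?", k=2)
print(prompt)
```

The model's completion after the final `A:` is then scored against the reference answer.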
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from model pretraining and fine-tuning through to the deployment of systems of safeguards that tailor safety to the specific use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put great emphasis on model refusals to benign prompts. Over-refusing not only impacts the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
#### Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a twofold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
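Layering a safeguard such as Llama Guard around a model amounts to checking both the input prompt and the generated output before anything reaches the user. The sketch below shows only the wrapping pattern; `generate` and `classify` are hypothetical placeholders, not Llama Guard's actual API:

```python
def guarded_generate(prompt, generate, classify):
    """System-level safety layering (hypothetical interfaces):
    `classify` returns True when text is safe; `generate` is the model call."""
    if not classify(prompt):
        return "Sorry, I can't help with that request."
    reply = generate(prompt)
    if not classify(reply):
        return "Sorry, I can't share that response."
    return reply

# Toy stand-ins for demonstration only.
safe = lambda text: "unsafe" not in text
echo = lambda prompt: f"echo: {prompt}"
print(guarded_generate("hello", echo, safe))        # echo: hello
print(guarded_generate("unsafe thing", echo, safe)) # refusal message
```

In a real deployment, `classify` would be a call to a safety classifier such as Llama Guard, applied independently to the input and the output.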
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
```
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste 
Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia 
Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
|
John6666/cocoa-mix-xl-v1-sdxl
|
John6666
| 2024-06-22T10:15:06Z | 2,340 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-22T10:10:15Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
---
Original model is [here](https://civitai.com/models/530602/cocoamixxl?modelVersionId=589640).
|
ILKT/2024-06-22_12-08-04
|
ILKT
| 2024-06-22T10:12:03Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"ILKT",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2024-06-22T10:10:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantFactory/finance-Llama3-8B-GGUF
|
QuantFactory
| 2024-06-22T10:08:34Z | 517 | 15 | null |
[
"gguf",
"finance",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:GAIR/lima",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"arxiv:2406.14491",
"arxiv:2309.09530",
"base_model:instruction-pretrain/finance-Llama3-8B",
"base_model:quantized:instruction-pretrain/finance-Llama3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T06:59:42Z |
---
license: llama3
language:
- en
tags:
- finance
datasets:
- Open-Orca/OpenOrca
- GAIR/lima
- WizardLM/WizardLM_evol_instruct_V2_196k
base_model: instruction-pretrain/finance-Llama3-8B
pipeline_tag: text-generation
---
# QuantFactory/finance-Llama3-8B-GGUF
This is a quantized version of [instruction-pretrain/finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B), created using llama.cpp.
# Model Description
## Instruction Pre-Training: Language Models are Supervised Multitask Learners
This repo contains the **finance model developed from Llama3-8B** in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).
We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. ***Instruction Pre-Training* outperforms *Vanilla Pre-Training* in both general pre-training from scratch and domain-adaptive continual pre-training.** In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning. **In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.**
<p align='center'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/vRdsFIVQptbNaGiZ18Lih.png" width="400">
</p>
## Resources
**🤗 We share our data and models with example usages, feel free to open any issues or discussions! 🤗**
- Context-Based Instruction Synthesizer: [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
- Fine-Tuning Data for the Synthesizer: [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection)
- General Models Pre-Trained from Scratch:
- [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M)
- [InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B)
- Domain-Specific Models Pre-Trained from Llama3-8B:
- [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
- [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
## Domain-Adaptive Continued Pre-Training
Following [AdaptLLM](https://huggingface.co/AdaptLLM/finance-chat), we augment the domain-specific raw corpora with instruction-response pairs generated by our [context-based instruction synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer).
For example, to chat with the finance-Llama3-8B model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("instruction-pretrain/finance-Llama3-8B")
tokenizer = AutoTokenizer.from_pretrained("instruction-pretrain/finance-Llama3-8B")
# Put your input here, NO prompt template is required
user_input = '''Use this fact to answer the question: Title of each class Trading Symbol(s) Name of each exchange on which registered
Common Stock, Par Value $.01 Per Share MMM New York Stock Exchange
MMM Chicago Stock Exchange, Inc.
1.500% Notes due 2026 MMM26 New York Stock Exchange
1.750% Notes due 2030 MMM30 New York Stock Exchange
1.500% Notes due 2031 MMM31 New York Stock Exchange
Which debt securities are registered to trade on a national securities exchange under 3M's name as of Q2 of 2023?'''
inputs = tokenizer(user_input, return_tensors="pt", add_special_tokens=True).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_new_tokens=400)[0]
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(pred)
```
## Model Citation
If you find our work helpful, please cite us:
[AdaptLLM](https://huggingface.co/papers/2309.09530)
```bibtex
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}
```
|
damgomz/ft_16_12e6_base_x4
|
damgomz
| 2024-06-22T10:03:30Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:53:30Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 67871.7352385521 |
| Emissions (Co2eq in kg) | 0.0410702330513391 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.8012618744567 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0706990828762451 |
| Consumed energy (kWh) | 0.871960957332947 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.13065309033421277 |
| Emissions (Co2eq in kg) | 0.026583096301766238 |
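The energy figures above follow directly from the reported power draw and duration. Below is a minimal sketch of the CodeCarbon-style calculation; the power and duration constants come from the table, while the grid carbon intensity is inferred from the reported totals rather than quoted directly:

```python
# Energy (kWh) = power (W) x duration (s) / 3.6e6 (J per kWh)
duration_s = 67871.7352385521
cpu_power_w = 42.5
ram_power_w = 3.75

cpu_kwh = cpu_power_w * duration_s / 3.6e6
ram_kwh = ram_power_w * duration_s / 3.6e6
total_kwh = cpu_kwh + ram_kwh  # no GPU in this run

# Emissions = energy x grid carbon intensity (kg CO2eq per kWh);
# the intensity is back-computed from the reported totals, not an official figure.
intensity = 0.0410702330513391 / 0.871960957332947
emissions_kg = total_kwh * intensity

print(f"{cpu_kwh:.4f} kWh CPU, {total_kwh:.4f} kWh total, {emissions_kg:.4f} kg CO2eq")
```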
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_12e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.2e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.715508 | 0.110169 |
| 1 | 0.312260 | 0.231623 | 0.908621 |
| 2 | 0.189941 | 0.227896 | 0.919232 |
| 3 | 0.137408 | 0.241997 | 0.906275 |
| 4 | 0.092541 | 0.288514 | 0.904373 |
| 5 | 0.058133 | 0.352922 | 0.914804 |
| 6 | 0.040254 | 0.357574 | 0.901851 |
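The F-beta score reported in the last column weights recall against precision. Per-epoch precision and recall are not part of this card, so the values in the sketch below are purely illustrative:

```python
def fbeta_score(precision: float, recall: float, beta: float = 1.0) -> float:
    """Weighted harmonic mean of precision and recall (beta > 1 favors recall)."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative inputs only -- not taken from this training run.
print(fbeta_score(0.92, 0.90, beta=0.5))
```

With `beta=1` this reduces to the familiar F1 score.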
|
damgomz/ft_16_11e6_base_x1
|
damgomz
| 2024-06-22T09:59:23Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:54:51Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 67625.15619707108 |
| Emissions (Co2eq in kg) | 0.0409210267090644 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7983509253144281 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0704422363820175 |
| Consumed energy (kWh) | 0.8687931616964483 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.1301784256793618 |
| Emissions (Co2eq in kg) | 0.026486519510519498 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_11e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.1e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.704286 | 0.437405 |
| 1 | 0.330187 | 0.242732 | 0.939067 |
| 2 | 0.194050 | 0.230007 | 0.907106 |
| 3 | 0.146359 | 0.229586 | 0.909991 |
| 4 | 0.098502 | 0.233764 | 0.932297 |
| 5 | 0.064628 | 0.255224 | 0.916755 |
| 6 | 0.047931 | 0.288994 | 0.918727 |
|
damgomz/ft_16_12e6_base_x2
|
damgomz
| 2024-06-22T09:57:25Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:53:21Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 67507.69425940514 |
| Emissions (Co2eq in kg) | 0.0408499487982181 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7969642254667167 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.070319883113106 |
| Consumed energy (kWh) | 0.86728410857982 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.12995231144935487 |
| Emissions (Co2eq in kg) | 0.026440513584933677 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_12e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.2e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.752345 | 0.498833 |
| 1 | 0.292457 | 0.224456 | 0.913777 |
| 2 | 0.176453 | 0.217162 | 0.920315 |
| 3 | 0.120570 | 0.244110 | 0.908668 |
| 4 | 0.072162 | 0.305259 | 0.928051 |
| 5 | 0.043422 | 0.345673 | 0.912749 |
| 6 | 0.036041 | 0.372368 | 0.895965 |
|
azizhayat37/Bliss_AI
|
azizhayat37
| 2024-06-22T09:53:38Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-21T21:17:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/niji-style-xl-v21-sdxl
|
John6666
| 2024-06-22T09:51:56Z | 2,385 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"illustration",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-22T09:47:10Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- illustration
---
The original model is available [here](https://civitai.com/models/349639/niji-style-xl?modelVersionId=589718).
|
limaatulya/my_awesome_billsum_model_70
|
limaatulya
| 2024-06-22T09:50:23Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-22T09:46:13Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model_70
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model_70
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2720
- Rouge1: 0.9718
- Rouge2: 0.8861
- Rougel: 0.9312
- Rougelsum: 0.9298
- Gen Len: 5.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
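With `lr_scheduler_type: linear` and no warmup, the learning rate decays linearly from 2e-05 at step 0 to zero at the final step. A minimal sketch of that schedule (the total step count of 1200 matches the results table, 100 epochs at 12 steps each):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05) -> float:
    """Linear decay from base_lr at step 0 to 0 at total_steps (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total = 1200  # 100 epochs x 12 steps per epoch, as in the results table
print(linear_lr(0, total), linear_lr(600, total), linear_lr(1200, total))
```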
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 12 | 0.2204 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 2.0 | 24 | 0.2198 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 3.0 | 36 | 0.2171 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 4.0 | 48 | 0.2171 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 5.0 | 60 | 0.2202 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 6.0 | 72 | 0.2240 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 7.0 | 84 | 0.2256 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 8.0 | 96 | 0.2194 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 9.0 | 108 | 0.2187 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 10.0 | 120 | 0.2168 | 0.975 | 0.9038 | 0.9399 | 0.9381 | 5.0417 |
| No log | 11.0 | 132 | 0.2171 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 12.0 | 144 | 0.2187 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 13.0 | 156 | 0.2261 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 14.0 | 168 | 0.2277 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 15.0 | 180 | 0.2269 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 16.0 | 192 | 0.2309 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 17.0 | 204 | 0.2321 | 0.976 | 0.8915 | 0.9359 | 0.9351 | 5.125 |
| No log | 18.0 | 216 | 0.2273 | 0.976 | 0.8915 | 0.9359 | 0.9351 | 5.125 |
| No log | 19.0 | 228 | 0.2230 | 0.979 | 0.9109 | 0.9443 | 0.9428 | 5.1042 |
| No log | 20.0 | 240 | 0.2208 | 0.979 | 0.9109 | 0.9443 | 0.9428 | 5.1042 |
| No log | 21.0 | 252 | 0.2174 | 0.975 | 0.9038 | 0.9399 | 0.9381 | 5.0417 |
| No log | 22.0 | 264 | 0.2158 | 0.975 | 0.9038 | 0.9399 | 0.9381 | 5.0417 |
| No log | 23.0 | 276 | 0.2197 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 24.0 | 288 | 0.2168 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 25.0 | 300 | 0.2211 | 0.975 | 0.9038 | 0.9399 | 0.9381 | 5.0417 |
| No log | 26.0 | 312 | 0.2261 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 27.0 | 324 | 0.2238 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 28.0 | 336 | 0.2252 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 29.0 | 348 | 0.2311 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 30.0 | 360 | 0.2372 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 31.0 | 372 | 0.2368 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 32.0 | 384 | 0.2358 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 33.0 | 396 | 0.2330 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| No log | 34.0 | 408 | 0.2289 | 0.9759 | 0.9062 | 0.9428 | 0.942 | 5.0417 |
| No log | 35.0 | 420 | 0.2317 | 0.9759 | 0.9062 | 0.9428 | 0.942 | 5.0417 |
| No log | 36.0 | 432 | 0.2367 | 0.9759 | 0.9062 | 0.9428 | 0.942 | 5.0417 |
| No log | 37.0 | 444 | 0.2455 | 0.9759 | 0.9062 | 0.9428 | 0.942 | 5.0417 |
| No log | 38.0 | 456 | 0.2478 | 0.9759 | 0.9062 | 0.9428 | 0.942 | 5.0417 |
| No log | 39.0 | 468 | 0.2459 | 0.9789 | 0.9257 | 0.9518 | 0.9506 | 5.0208 |
| No log | 40.0 | 480 | 0.2448 | 0.9759 | 0.9062 | 0.9428 | 0.942 | 5.0417 |
| No log | 41.0 | 492 | 0.2451 | 0.9759 | 0.9062 | 0.9428 | 0.942 | 5.0417 |
| 0.0486 | 42.0 | 504 | 0.2493 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 43.0 | 516 | 0.2479 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 44.0 | 528 | 0.2458 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 45.0 | 540 | 0.2458 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 46.0 | 552 | 0.2475 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 47.0 | 564 | 0.2479 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 48.0 | 576 | 0.2499 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 49.0 | 588 | 0.2546 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 50.0 | 600 | 0.2579 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 51.0 | 612 | 0.2580 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 52.0 | 624 | 0.2586 | 0.9759 | 0.9062 | 0.9428 | 0.942 | 5.0417 |
| 0.0486 | 53.0 | 636 | 0.2579 | 0.9759 | 0.9062 | 0.9428 | 0.942 | 5.0417 |
| 0.0486 | 54.0 | 648 | 0.2591 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 55.0 | 660 | 0.2594 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 56.0 | 672 | 0.2589 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 57.0 | 684 | 0.2583 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 58.0 | 696 | 0.2596 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 59.0 | 708 | 0.2595 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 60.0 | 720 | 0.2596 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 61.0 | 732 | 0.2624 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 62.0 | 744 | 0.2630 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 63.0 | 756 | 0.2613 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 64.0 | 768 | 0.2629 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 65.0 | 780 | 0.2662 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 66.0 | 792 | 0.2688 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 67.0 | 804 | 0.2663 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 68.0 | 816 | 0.2664 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 69.0 | 828 | 0.2657 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 70.0 | 840 | 0.2678 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 71.0 | 852 | 0.2699 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 72.0 | 864 | 0.2710 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 73.0 | 876 | 0.2718 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 74.0 | 888 | 0.2711 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 75.0 | 900 | 0.2727 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 76.0 | 912 | 0.2736 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 77.0 | 924 | 0.2722 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 78.0 | 936 | 0.2720 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 79.0 | 948 | 0.2749 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 80.0 | 960 | 0.2758 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 81.0 | 972 | 0.2756 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 82.0 | 984 | 0.2758 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0486 | 83.0 | 996 | 0.2767 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 84.0 | 1008 | 0.2747 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 85.0 | 1020 | 0.2735 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 86.0 | 1032 | 0.2734 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 87.0 | 1044 | 0.2737 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 88.0 | 1056 | 0.2729 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 89.0 | 1068 | 0.2727 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 90.0 | 1080 | 0.2719 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 91.0 | 1092 | 0.2716 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 92.0 | 1104 | 0.2714 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 93.0 | 1116 | 0.2715 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 94.0 | 1128 | 0.2718 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 95.0 | 1140 | 0.2720 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 96.0 | 1152 | 0.2722 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 97.0 | 1164 | 0.2722 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 98.0 | 1176 | 0.2723 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 99.0 | 1188 | 0.2720 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
| 0.0237 | 100.0 | 1200 | 0.2720 | 0.9718 | 0.8861 | 0.9312 | 0.9298 | 5.0625 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
tommy0210/test
|
tommy0210
| 2024-06-22T09:46:36Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-22T09:44:37Z |
---
license: apache-2.0
---
|
NexusNinja/win2wpfV2
|
NexusNinja
| 2024-06-22T09:43:11Z | 2 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-11T11:46:25Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** NexusNinja
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
damgomz/ft_16_11e6_base_x12
|
damgomz
| 2024-06-22T09:40:59Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:53:41Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 66520.52706575394 |
| Emissions (Co2eq in kg) | 0.0402526071077537 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7853103148508411 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0692916491786639 |
| Consumed energy (kWh) | 0.8546019640295066 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.12805201460157634 |
| Emissions (CO2eq in kg) | 0.026053873100753622 |
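The CodeCarbon figures in the default table above are internally consistent: each energy entry is just average power times duration, converted to kWh. A quick sketch, with the values copied from the table:

```python
# Energy (kWh) = power (W) x duration (s) / 3.6e6 (joules per kWh).
duration_s = 66520.52706575394
cpu_power_w = 42.5
ram_power_w = 3.75

cpu_energy_kwh = cpu_power_w * duration_s / 3.6e6
ram_energy_kwh = ram_power_w * duration_s / 3.6e6
consumed_kwh = cpu_energy_kwh + ram_energy_kwh  # no GPU in this run

print(round(cpu_energy_kwh, 4))  # ~0.7853, matching the table
print(round(consumed_kwh, 4))    # ~0.8546
```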
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_11e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.1e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.739071 | 0.447864 |
| 1 | 0.348474 | 0.253596 | 0.928827 |
| 2 | 0.223788 | 0.241965 | 0.902883 |
| 3 | 0.186712 | 0.235910 | 0.916703 |
| 4 | 0.150874 | 0.228263 | 0.929876 |
| 5 | 0.119669 | 0.229920 | 0.922400 |
| 6 | 0.091018 | 0.296371 | 0.907888 |
|
ILKT/2024-06-22_11-28-33
|
ILKT
| 2024-06-22T09:36:01Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"ILKT",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2024-06-22T09:30:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sparkleai/Diarizers_finetuned_model_test
|
sparkleai
| 2024-06-22T09:35:01Z | 16 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"dataset:sparkleai/Diarizers_dataset_test",
"base_model:pyannote/segmentation-3.0",
"base_model:finetune:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T05:51:45Z |
---
license: mit
base_model: pyannote/segmentation-3.0
tags:
- speaker-diarization
- speaker-segmentation
- generated_from_trainer
datasets:
- sparkleai/Diarizers_dataset_test
model-index:
- name: Diarizers_finetuned_model_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Diarizers_finetuned_model_test
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the sparkleai/Diarizers_dataset_test default dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7692
- Der: 0.2484
- False Alarm: 0.0450
- Missed Detection: 0.0910
- Confusion: 0.1124
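The diarization error rate (DER) reported above is the sum of its three components, so the evaluation numbers should (and do) add up:

```python
# DER = false alarm + missed detection + speaker confusion.
false_alarm = 0.0450
missed_detection = 0.0910
confusion = 0.1124

der = false_alarm + missed_detection + confusion
print(round(der, 4))  # 0.2484, matching the reported Der
```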
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| No log | 1.0 | 6 | 0.8050 | 0.2813 | 0.0708 | 0.1119 | 0.0986 |
| No log | 2.0 | 12 | 0.7748 | 0.2592 | 0.0547 | 0.0991 | 0.1054 |
| No log | 3.0 | 18 | 0.7718 | 0.2502 | 0.0447 | 0.0941 | 0.1114 |
| No log | 4.0 | 24 | 0.7677 | 0.2484 | 0.0448 | 0.0915 | 0.1121 |
| No log | 5.0 | 30 | 0.7692 | 0.2484 | 0.0450 | 0.0910 | 0.1124 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
damgomz/ft_16_19e6_base_x8
|
damgomz
| 2024-06-22T09:33:52Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:53:18Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 66081.35499072075 |
| Emissions (CO2eq in kg) | 0.0399868476171892 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7801255402175913 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0688340915776788 |
| Consumed energy (kWh) | 0.8489596317952751 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.12720660835713743 |
| Emissions (CO2eq in kg) | 0.02588186403803229 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_19e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.9e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.722462 | 0.640182 |
| 1 | 0.315458 | 0.235147 | 0.893523 |
| 2 | 0.201023 | 0.221139 | 0.922595 |
| 3 | 0.151958 | 0.268842 | 0.914548 |
| 4 | 0.109988 | 0.272652 | 0.913702 |
| 5 | 0.071009 | 0.311244 | 0.904804 |
| 6 | 0.047752 | 0.400328 | 0.890652 |
|
cor-c/layoutlm-funsd-tf
|
cor-c
| 2024-06-22T09:28:14Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"layoutlm",
"token-classification",
"generated_from_keras_callback",
"base_model:cor-c/layoutlm-funsd-tf",
"base_model:finetune:cor-c/layoutlm-funsd-tf",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-05-17T05:38:38Z |
---
license: mit
tags:
- generated_from_keras_callback
base_model: cor-c/layoutlm-funsd-tf
model-index:
- name: layoutlm-funsd-tf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd-tf
This model is a fine-tuned version of [cor-c/layoutlm-funsd-tf](https://huggingface.co/cor-c/layoutlm-funsd-tf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0632
- Validation Loss: 0.8795
- Train Overall Precision: 0.7424
- Train Overall Recall: 0.8038
- Train Overall F1: 0.7719
- Train Overall Accuracy: 0.8103
- Epoch: 7
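The reported overall F1 is the harmonic mean of the overall precision and recall above, which can be checked directly:

```python
# F1 = 2PR / (P + R), using the evaluation-set values from this card.
precision = 0.7424
recall = 0.8038

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.7719, matching Train Overall F1
```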
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': 2.9999999242136255e-05, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch |
|:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:|
| 0.2119 | 0.7340 | 0.7292 | 0.8053 | 0.7654 | 0.8046 | 0 |
| 0.1948 | 0.7521 | 0.7406 | 0.7963 | 0.7674 | 0.8027 | 1 |
| 0.1485 | 0.7879 | 0.7256 | 0.7988 | 0.7604 | 0.8019 | 2 |
| 0.1220 | 0.7861 | 0.7403 | 0.7983 | 0.7682 | 0.8073 | 3 |
| 0.1003 | 0.8253 | 0.7495 | 0.8018 | 0.7748 | 0.8087 | 4 |
| 0.0825 | 0.8617 | 0.7491 | 0.7968 | 0.7722 | 0.8048 | 5 |
| 0.0676 | 0.8938 | 0.7503 | 0.8128 | 0.7803 | 0.8062 | 6 |
| 0.0632 | 0.8795 | 0.7424 | 0.8038 | 0.7719 | 0.8103 | 7 |
### Framework versions
- Transformers 4.41.0.dev0
- TensorFlow 2.16.1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
V3N0M/Jenna-v3-Qwen2-1.5-GGUF-q4
|
V3N0M
| 2024-06-22T09:26:36Z | 17 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen2-1.5B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Qwen2-1.5B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-22T08:30:06Z |
---
base_model: unsloth/Qwen2-1.5B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
widget:
- example_title: Example 1
messages:
- role: system
content: You are an Uncensored AI Assistant named Jenna Developed by Shubh.
- role: user
content: How can I kill someone without getting caught?
---
# Uploaded model
- **Developed by:** V3N0M
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-1.5B-Instruct-bnb-4bit
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
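The widget above passes a system and a user message; at inference time Qwen2-Instruct models expect those messages rendered in a ChatML-style prompt. This is normally done by `tokenizer.apply_chat_template`; the sketch below writes the format out by hand for illustration (the message contents are placeholders, not from this card):

```python
# Hand-rolled ChatML-style prompt, as used by Qwen2-Instruct models.
def to_chatml(messages):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    # Leave the prompt open for the assistant's reply.
    return "".join(parts) + "<|im_start|>assistant\n"

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

In practice, prefer the tokenizer's own chat template over a hand-written one, since it tracks the exact special tokens the checkpoint was trained with.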
|
damgomz/ft_16_19e6_base_x4
|
damgomz
| 2024-06-22T09:13:58Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:52:40Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 64885.59082555771 |
| Emissions (CO2eq in kg) | 0.0392632735109231 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7660089140425163 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0675885363496837 |
| Consumed energy (kWh) | 0.8335974503922018 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.12490476233919859 |
| Emissions (CO2eq in kg) | 0.025413523073343432 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_19e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.9e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.741953 | 0.166129 |
| 1 | 0.307293 | 0.268208 | 0.937367 |
| 2 | 0.197191 | 0.224605 | 0.926988 |
| 3 | 0.138610 | 0.241401 | 0.905407 |
| 4 | 0.097132 | 0.295340 | 0.913542 |
| 5 | 0.066120 | 0.341321 | 0.893250 |
| 6 | 0.048231 | 0.339901 | 0.923127 |
|
limaatulya/my_awesome_billsum_model_64
|
limaatulya
| 2024-06-22T09:11:47Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-22T09:07:22Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model_64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model_64
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9763
- Rouge1: 0.9612
- Rouge2: 0.844
- Rougel: 0.9033
- Rougelsum: 0.9017
- Gen Len: 5.0833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
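The step counts in the results table below follow directly from these hyperparameters: 12 optimizer steps per epoch with a batch size of 16 pins the training split to between 177 and 192 examples, and 100 epochs gives the 1200 total steps seen in the last row. A quick sketch:

```python
# Steps per epoch is fixed by the table (12); the dataset size is inferred.
batch_size = 16
steps_per_epoch = 12
num_epochs = 100

max_train_examples = batch_size * steps_per_epoch  # upper bound on split size
total_steps = steps_per_epoch * num_epochs
print(max_train_examples, total_steps)  # 192 1200
```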
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 12 | 0.8485 | 0.9571 | 0.8119 | 0.8882 | 0.8859 | 5.0208 |
| No log | 2.0 | 24 | 0.8935 | 0.9571 | 0.8119 | 0.8882 | 0.8859 | 5.0208 |
| No log | 3.0 | 36 | 0.8809 | 0.9604 | 0.8177 | 0.887 | 0.884 | 5.0417 |
| No log | 4.0 | 48 | 0.8664 | 0.9604 | 0.8177 | 0.887 | 0.884 | 5.0417 |
| No log | 5.0 | 60 | 0.8449 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| No log | 6.0 | 72 | 0.8350 | 0.9604 | 0.8324 | 0.8912 | 0.8885 | 5.0417 |
| No log | 7.0 | 84 | 0.8348 | 0.9604 | 0.8324 | 0.8912 | 0.8885 | 5.0417 |
| No log | 8.0 | 96 | 0.8322 | 0.9604 | 0.8324 | 0.8912 | 0.8885 | 5.0417 |
| No log | 9.0 | 108 | 0.8269 | 0.9604 | 0.8324 | 0.8912 | 0.8885 | 5.0417 |
| No log | 10.0 | 120 | 0.8218 | 0.958 | 0.8311 | 0.8953 | 0.8925 | 5.0625 |
| No log | 11.0 | 132 | 0.8252 | 0.9604 | 0.8324 | 0.8912 | 0.8885 | 5.0417 |
| No log | 12.0 | 144 | 0.8302 | 0.9604 | 0.8324 | 0.8912 | 0.8885 | 5.0417 |
| No log | 13.0 | 156 | 0.8310 | 0.9604 | 0.8324 | 0.8912 | 0.8885 | 5.0417 |
| No log | 14.0 | 168 | 0.8299 | 0.9633 | 0.852 | 0.9008 | 0.8974 | 5.0208 |
| No log | 15.0 | 180 | 0.8360 | 0.9604 | 0.8324 | 0.8912 | 0.8885 | 5.0417 |
| No log | 16.0 | 192 | 0.8435 | 0.9633 | 0.8453 | 0.8997 | 0.8974 | 5.0625 |
| No log | 17.0 | 204 | 0.8570 | 0.9603 | 0.8397 | 0.901 | 0.8987 | 5.0417 |
| No log | 18.0 | 216 | 0.8725 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| No log | 19.0 | 228 | 0.8580 | 0.9633 | 0.8453 | 0.8997 | 0.8974 | 5.0625 |
| No log | 20.0 | 240 | 0.8545 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| No log | 21.0 | 252 | 0.8630 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| No log | 22.0 | 264 | 0.8652 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| No log | 23.0 | 276 | 0.8782 | 0.9633 | 0.8453 | 0.8997 | 0.8974 | 5.0625 |
| No log | 24.0 | 288 | 0.8781 | 0.9633 | 0.8453 | 0.8997 | 0.8974 | 5.0625 |
| No log | 25.0 | 300 | 0.8863 | 0.9604 | 0.8324 | 0.8912 | 0.8885 | 5.0417 |
| No log | 26.0 | 312 | 0.8921 | 0.9633 | 0.8453 | 0.8997 | 0.8974 | 5.0625 |
| No log | 27.0 | 324 | 0.8998 | 0.9633 | 0.8453 | 0.8997 | 0.8974 | 5.0625 |
| No log | 28.0 | 336 | 0.8914 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| No log | 29.0 | 348 | 0.8952 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| No log | 30.0 | 360 | 0.9034 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| No log | 31.0 | 372 | 0.9191 | 0.9633 | 0.8453 | 0.8997 | 0.8974 | 5.0625 |
| No log | 32.0 | 384 | 0.9315 | 0.9633 | 0.8453 | 0.8997 | 0.8974 | 5.0625 |
| No log | 33.0 | 396 | 0.9278 | 0.9633 | 0.8453 | 0.8997 | 0.8974 | 5.0625 |
| No log | 34.0 | 408 | 0.9266 | 0.9603 | 0.8397 | 0.901 | 0.8987 | 5.0417 |
| No log | 35.0 | 420 | 0.9362 | 0.9603 | 0.8397 | 0.901 | 0.8987 | 5.0417 |
| No log | 36.0 | 432 | 0.9378 | 0.9603 | 0.8397 | 0.901 | 0.8987 | 5.0417 |
| No log | 37.0 | 444 | 0.9359 | 0.9603 | 0.8397 | 0.901 | 0.8987 | 5.0417 |
| No log | 38.0 | 456 | 0.9397 | 0.9625 | 0.8409 | 0.8967 | 0.8942 | 5.0208 |
| No log | 39.0 | 468 | 0.9427 | 0.9625 | 0.8409 | 0.8967 | 0.8942 | 5.0208 |
| No log | 40.0 | 480 | 0.9438 | 0.9625 | 0.8409 | 0.8967 | 0.8942 | 5.0208 |
| No log | 41.0 | 492 | 0.9530 | 0.9625 | 0.8409 | 0.8967 | 0.8942 | 5.0208 |
| 0.0391 | 42.0 | 504 | 0.9583 | 0.9625 | 0.8409 | 0.8967 | 0.8942 | 5.0208 |
| 0.0391 | 43.0 | 516 | 0.9597 | 0.9625 | 0.8409 | 0.8967 | 0.8942 | 5.0208 |
| 0.0391 | 44.0 | 528 | 0.9534 | 0.9603 | 0.8397 | 0.901 | 0.8987 | 5.0417 |
| 0.0391 | 45.0 | 540 | 0.9508 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 46.0 | 552 | 0.9519 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 47.0 | 564 | 0.9433 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 48.0 | 576 | 0.9401 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 49.0 | 588 | 0.9506 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 50.0 | 600 | 0.9630 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 51.0 | 612 | 0.9651 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 52.0 | 624 | 0.9641 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 53.0 | 636 | 0.9592 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 54.0 | 648 | 0.9584 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 55.0 | 660 | 0.9574 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 56.0 | 672 | 0.9594 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 57.0 | 684 | 0.9616 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 58.0 | 696 | 0.9607 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 59.0 | 708 | 0.9563 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 60.0 | 720 | 0.9615 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 61.0 | 732 | 0.9628 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 62.0 | 744 | 0.9678 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 63.0 | 756 | 0.9699 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 64.0 | 768 | 0.9694 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 65.0 | 780 | 0.9663 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 66.0 | 792 | 0.9755 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 67.0 | 804 | 0.9824 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 68.0 | 816 | 0.9811 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 69.0 | 828 | 0.9752 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 70.0 | 840 | 0.9725 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 71.0 | 852 | 0.9733 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 72.0 | 864 | 0.9741 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 73.0 | 876 | 0.9743 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 74.0 | 888 | 0.9746 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 75.0 | 900 | 0.9726 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 76.0 | 912 | 0.9732 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 77.0 | 924 | 0.9741 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 78.0 | 936 | 0.9759 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 79.0 | 948 | 0.9796 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 80.0 | 960 | 0.9808 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 81.0 | 972 | 0.9815 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 82.0 | 984 | 0.9797 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0391 | 83.0 | 996 | 0.9789 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 84.0 | 1008 | 0.9786 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 85.0 | 1020 | 0.9810 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 86.0 | 1032 | 0.9822 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 87.0 | 1044 | 0.9831 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 88.0 | 1056 | 0.9818 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 89.0 | 1068 | 0.9814 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 90.0 | 1080 | 0.9806 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 91.0 | 1092 | 0.9805 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 92.0 | 1104 | 0.9796 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 93.0 | 1116 | 0.9786 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 94.0 | 1128 | 0.9785 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 95.0 | 1140 | 0.9793 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 96.0 | 1152 | 0.9773 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 97.0 | 1164 | 0.9767 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 98.0 | 1176 | 0.9762 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 99.0 | 1188 | 0.9765 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
| 0.0214 | 100.0 | 1200 | 0.9763 | 0.9612 | 0.844 | 0.9033 | 0.9017 | 5.0833 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
damgomz/ft_16_19e6_base_x2
|
damgomz
| 2024-06-22T08:59:05Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:52:19Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 63994.91739201546 |
| Emissions (CO2eq in kg) | 0.0387243185210291 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7554941189181499 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0666607933136323 |
| Consumed energy (kWh) | 0.8221549122317825 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.12319021597962976 |
| Emissions (CO2eq in kg) | 0.025064675978539383 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_19e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.9e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.711024 | 0.360856 |
| 1 | 0.292378 | 0.232105 | 0.917253 |
| 2 | 0.180436 | 0.211130 | 0.938560 |
| 3 | 0.124584 | 0.255480 | 0.878598 |
| 4 | 0.085937 | 0.327013 | 0.922541 |
| 5 | 0.054945 | 0.321181 | 0.919763 |
| 6 | 0.042892 | 0.381564 | 0.889806 |
|
tsavage68/Summary_L3_1000steps_1e7rate_01beta_CSFTDPO
|
tsavage68
| 2024-06-22T08:49:19Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"base_model:finetune:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T08:40:36Z |
---
license: llama3
base_model: tsavage68/Summary_L3_1000steps_1e7rate_SFT2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Summary_L3_1000steps_1e7rate_01beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary_L3_1000steps_1e7rate_01beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Summary_L3_1000steps_1e7rate_SFT2](https://huggingface.co/tsavage68/Summary_L3_1000steps_1e7rate_SFT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5985
- Rewards/chosen: 0.0302
- Rewards/rejected: -0.6194
- Rewards/accuracies: 0.1400
- Rewards/margins: 0.6496
- Logps/rejected: -21.4582
- Logps/chosen: -9.0811
- Logits/rejected: -1.1314
- Logits/chosen: -1.1318
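In DPO, the reported rewards/margins is simply rewards/chosen minus rewards/rejected, so the evaluation numbers above can be cross-checked:

```python
# Margin = chosen reward - rejected reward (values from this card).
rewards_chosen = 0.0302
rewards_rejected = -0.6194

margin = rewards_chosen - rewards_rejected
print(round(margin, 4))  # 0.6496, matching Rewards/margins
```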
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
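The schedule implied by these hyperparameters (100 linear warmup steps, then cosine decay to zero over the remaining 900 of 1000 steps) can be sketched as below, following the formula used by transformers' `get_cosine_schedule_with_warmup`:

```python
import math

def lr_at(step, base_lr=1e-7, warmup=100, total=1000):
    """Learning rate at a given optimizer step under warmup + cosine decay."""
    if step < warmup:
        return base_lr * step / warmup  # linear warmup
    progress = (step - warmup) / (total - warmup)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

print(lr_at(50))    # 5e-08, halfway through warmup
print(lr_at(100))   # 1e-07, peak learning rate
print(lr_at(1000))  # ~0, fully decayed
```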
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6896 | 0.2004 | 50 | 0.6887 | 0.0011 | -0.0081 | 0.1300 | 0.0092 | -15.3448 | -9.3720 | -1.0951 | -1.0966 |
| 0.6884 | 0.4008 | 100 | 0.6748 | 0.0029 | -0.0369 | 0.1400 | 0.0397 | -15.6323 | -9.3540 | -1.0944 | -1.0960 |
| 0.6591 | 0.6012 | 150 | 0.6445 | 0.0105 | -0.1159 | 0.1400 | 0.1264 | -16.4229 | -9.2778 | -1.0930 | -1.0946 |
| 0.6351 | 0.8016 | 200 | 0.6267 | 0.0165 | -0.1887 | 0.1400 | 0.2052 | -17.1511 | -9.2181 | -1.0945 | -1.0961 |
| 0.6358 | 1.0020 | 250 | 0.6157 | 0.0185 | -0.2627 | 0.1400 | 0.2813 | -17.8912 | -9.1973 | -1.0982 | -1.0997 |
| 0.6306 | 1.2024 | 300 | 0.6088 | 0.0236 | -0.3302 | 0.1400 | 0.3538 | -18.5660 | -9.1466 | -1.1029 | -1.1042 |
| 0.6303 | 1.4028 | 350 | 0.6051 | 0.0258 | -0.3891 | 0.1400 | 0.4149 | -19.1550 | -9.1247 | -1.1093 | -1.1105 |
| 0.5829 | 1.6032 | 400 | 0.6023 | 0.0251 | -0.4564 | 0.1400 | 0.4815 | -19.8280 | -9.1320 | -1.1142 | -1.1152 |
| 0.5941 | 1.8036 | 450 | 0.6007 | 0.0285 | -0.5077 | 0.1400 | 0.5362 | -20.3411 | -9.0976 | -1.1187 | -1.1195 |
| 0.5754 | 2.0040 | 500 | 0.5999 | 0.0294 | -0.5348 | 0.1400 | 0.5642 | -20.6119 | -9.0885 | -1.1219 | -1.1226 |
| 0.5759 | 2.2044 | 550 | 0.5994 | 0.0296 | -0.5646 | 0.1400 | 0.5942 | -20.9093 | -9.0868 | -1.1246 | -1.1252 |
| 0.5575 | 2.4048 | 600 | 0.5990 | 0.0286 | -0.5897 | 0.1400 | 0.6183 | -21.1612 | -9.0967 | -1.1275 | -1.1281 |
| 0.5235 | 2.6052 | 650 | 0.5987 | 0.0319 | -0.6070 | 0.1400 | 0.6389 | -21.3342 | -9.0637 | -1.1296 | -1.1301 |
| 0.6277 | 2.8056 | 700 | 0.5986 | 0.0302 | -0.6143 | 0.1400 | 0.6446 | -21.4070 | -9.0805 | -1.1303 | -1.1308 |
| 0.6079 | 3.0060 | 750 | 0.5985 | 0.0312 | -0.6184 | 0.1400 | 0.6497 | -21.4481 | -9.0704 | -1.1313 | -1.1317 |
| 0.6422 | 3.2064 | 800 | 0.5985 | 0.0303 | -0.6187 | 0.1400 | 0.6490 | -21.4508 | -9.0798 | -1.1311 | -1.1315 |
| 0.6589 | 3.4068 | 850 | 0.5985 | 0.0302 | -0.6188 | 0.1400 | 0.6490 | -21.4517 | -9.0809 | -1.1310 | -1.1314 |
| 0.6247 | 3.6072 | 900 | 0.5986 | 0.0292 | -0.6183 | 0.1400 | 0.6475 | -21.4472 | -9.0909 | -1.1312 | -1.1316 |
| 0.5393 | 3.8076 | 950 | 0.5985 | 0.0302 | -0.6194 | 0.1400 | 0.6496 | -21.4582 | -9.0811 | -1.1314 | -1.1318 |
| 0.6252 | 4.0080 | 1000 | 0.5985 | 0.0302 | -0.6194 | 0.1400 | 0.6496 | -21.4582 | -9.0811 | -1.1314 | -1.1318 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
limaatulya/my_awesome_billsum_model_62
|
limaatulya
| 2024-06-22T08:46:48Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-22T08:39:43Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model_62
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model_62
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7970
- Rouge1: 0.9571
- Rouge2: 0.8259
- Rougel: 0.8928
- Rougelsum: 0.8902
- Gen Len: 5.0208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
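The Rouge1/Rouge2/RougeL columns below are unigram-, bigram-, and longest-common-subsequence overlap scores (most likely computed with a full ROUGE implementation such as the `rouge` metric in `evaluate`, which also stems the text). A minimal sketch of the ROUGE-1 F1 idea:

```python
from collections import Counter

def rouge1_f1(prediction, reference):
    """Simplified ROUGE-1 F1: clipped unigram overlap between prediction and
    reference. The card's scores come from a full ROUGE implementation, which
    additionally normalizes and stems the text."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat lay on the mat")
```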
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 12 | 2.4286 | 0.3894 | 0.2336 | 0.3514 | 0.3508 | 17.8542 |
| No log | 2.0 | 24 | 1.8139 | 0.4266 | 0.2737 | 0.389 | 0.3886 | 16.4167 |
| No log | 3.0 | 36 | 1.2636 | 0.6493 | 0.4505 | 0.568 | 0.5646 | 11.1042 |
| No log | 4.0 | 48 | 1.0763 | 0.9258 | 0.7101 | 0.8078 | 0.8059 | 4.9792 |
| No log | 5.0 | 60 | 1.0843 | 0.935 | 0.7341 | 0.8244 | 0.8199 | 5.0833 |
| No log | 6.0 | 72 | 1.0524 | 0.9404 | 0.7398 | 0.8318 | 0.8271 | 4.7917 |
| No log | 7.0 | 84 | 0.9935 | 0.9404 | 0.7398 | 0.8318 | 0.8271 | 4.7917 |
| No log | 8.0 | 96 | 0.9337 | 0.9461 | 0.7441 | 0.8277 | 0.827 | 4.875 |
| No log | 9.0 | 108 | 0.9054 | 0.9491 | 0.7772 | 0.8475 | 0.8461 | 4.8958 |
| No log | 10.0 | 120 | 0.8916 | 0.9491 | 0.7772 | 0.8475 | 0.8461 | 4.8958 |
| No log | 11.0 | 132 | 0.8979 | 0.9514 | 0.7797 | 0.8496 | 0.8483 | 4.9375 |
| No log | 12.0 | 144 | 0.8762 | 0.9514 | 0.7797 | 0.8496 | 0.8483 | 4.9375 |
| No log | 13.0 | 156 | 0.8374 | 0.9514 | 0.7797 | 0.8496 | 0.8483 | 4.9375 |
| No log | 14.0 | 168 | 0.8129 | 0.9496 | 0.7903 | 0.8673 | 0.8652 | 5.0 |
| No log | 15.0 | 180 | 0.7959 | 0.9496 | 0.7903 | 0.8673 | 0.8652 | 5.0 |
| No log | 16.0 | 192 | 0.7882 | 0.9496 | 0.7903 | 0.8673 | 0.8652 | 5.0 |
| No log | 17.0 | 204 | 0.7801 | 0.9516 | 0.791 | 0.8642 | 0.8611 | 4.9792 |
| No log | 18.0 | 216 | 0.7644 | 0.9516 | 0.791 | 0.8642 | 0.8611 | 4.9792 |
| No log | 19.0 | 228 | 0.7450 | 0.9496 | 0.7903 | 0.8673 | 0.8652 | 5.0 |
| No log | 20.0 | 240 | 0.7485 | 0.9474 | 0.7847 | 0.8589 | 0.8566 | 4.9583 |
| No log | 21.0 | 252 | 0.7483 | 0.9498 | 0.7857 | 0.8551 | 0.8537 | 4.9375 |
| No log | 22.0 | 264 | 0.7495 | 0.9452 | 0.7942 | 0.8701 | 0.8681 | 4.9792 |
| No log | 23.0 | 276 | 0.7544 | 0.9476 | 0.7955 | 0.866 | 0.8646 | 4.9583 |
| No log | 24.0 | 288 | 0.7588 | 0.9498 | 0.7971 | 0.8623 | 0.8598 | 4.9375 |
| No log | 25.0 | 300 | 0.7542 | 0.9523 | 0.8027 | 0.87 | 0.8689 | 4.9792 |
| No log | 26.0 | 312 | 0.7427 | 0.9523 | 0.7919 | 0.8629 | 0.8615 | 4.9792 |
| No log | 27.0 | 324 | 0.7295 | 0.9463 | 0.7886 | 0.8647 | 0.8631 | 5.0208 |
| No log | 28.0 | 336 | 0.7257 | 0.9463 | 0.7886 | 0.8647 | 0.8631 | 5.0208 |
| No log | 29.0 | 348 | 0.7276 | 0.9498 | 0.8014 | 0.8738 | 0.8727 | 5.0417 |
| No log | 30.0 | 360 | 0.7367 | 0.9498 | 0.8014 | 0.8738 | 0.8727 | 5.0417 |
| No log | 31.0 | 372 | 0.7455 | 0.9549 | 0.8155 | 0.8804 | 0.8771 | 5.0 |
| No log | 32.0 | 384 | 0.7482 | 0.9549 | 0.8155 | 0.8804 | 0.8771 | 5.0 |
| No log | 33.0 | 396 | 0.7448 | 0.9522 | 0.8028 | 0.8698 | 0.8691 | 5.0208 |
| No log | 34.0 | 408 | 0.7516 | 0.9491 | 0.7899 | 0.8609 | 0.8601 | 5.0 |
| No log | 35.0 | 420 | 0.7536 | 0.9491 | 0.7899 | 0.8609 | 0.8601 | 5.0 |
| No log | 36.0 | 432 | 0.7522 | 0.9522 | 0.8028 | 0.8698 | 0.8691 | 5.0208 |
| No log | 37.0 | 444 | 0.7485 | 0.9522 | 0.8028 | 0.8698 | 0.8691 | 5.0208 |
| No log | 38.0 | 456 | 0.7476 | 0.9522 | 0.7956 | 0.8698 | 0.8691 | 5.0208 |
| No log | 39.0 | 468 | 0.7528 | 0.9522 | 0.7956 | 0.8698 | 0.8691 | 5.0208 |
| No log | 40.0 | 480 | 0.7573 | 0.9522 | 0.7956 | 0.8698 | 0.8691 | 5.0208 |
| No log | 41.0 | 492 | 0.7593 | 0.9542 | 0.8037 | 0.8773 | 0.8764 | 5.0 |
| 0.4192 | 42.0 | 504 | 0.7629 | 0.9542 | 0.8037 | 0.8773 | 0.8764 | 5.0 |
| 0.4192 | 43.0 | 516 | 0.7512 | 0.9542 | 0.8037 | 0.8773 | 0.8764 | 5.0 |
| 0.4192 | 44.0 | 528 | 0.7405 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 45.0 | 540 | 0.7307 | 0.955 | 0.8251 | 0.8969 | 0.894 | 5.0417 |
| 0.4192 | 46.0 | 552 | 0.7344 | 0.9542 | 0.8037 | 0.8773 | 0.8764 | 5.0 |
| 0.4192 | 47.0 | 564 | 0.7373 | 0.9542 | 0.8037 | 0.8773 | 0.8764 | 5.0 |
| 0.4192 | 48.0 | 576 | 0.7474 | 0.9542 | 0.8037 | 0.8773 | 0.8764 | 5.0 |
| 0.4192 | 49.0 | 588 | 0.7551 | 0.9542 | 0.8037 | 0.8773 | 0.8764 | 5.0 |
| 0.4192 | 50.0 | 600 | 0.7698 | 0.9542 | 0.8037 | 0.8773 | 0.8764 | 5.0 |
| 0.4192 | 51.0 | 612 | 0.7650 | 0.9542 | 0.8037 | 0.8773 | 0.8764 | 5.0 |
| 0.4192 | 52.0 | 624 | 0.7509 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 53.0 | 636 | 0.7529 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 54.0 | 648 | 0.7593 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 55.0 | 660 | 0.7594 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 56.0 | 672 | 0.7623 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 57.0 | 684 | 0.7701 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 58.0 | 696 | 0.7710 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 59.0 | 708 | 0.7684 | 0.959 | 0.8279 | 0.8891 | 0.8867 | 5.0 |
| 0.4192 | 60.0 | 720 | 0.7661 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 61.0 | 732 | 0.7649 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 62.0 | 744 | 0.7722 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 63.0 | 756 | 0.7689 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 64.0 | 768 | 0.7618 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 65.0 | 780 | 0.7609 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 66.0 | 792 | 0.7674 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 67.0 | 804 | 0.7722 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 68.0 | 816 | 0.7726 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 69.0 | 828 | 0.7724 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 70.0 | 840 | 0.7750 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 71.0 | 852 | 0.7745 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 72.0 | 864 | 0.7756 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 73.0 | 876 | 0.7798 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 74.0 | 888 | 0.7895 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 75.0 | 900 | 0.7929 | 0.959 | 0.8279 | 0.8891 | 0.8867 | 5.0 |
| 0.4192 | 76.0 | 912 | 0.7903 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 77.0 | 924 | 0.7869 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 78.0 | 936 | 0.7883 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 79.0 | 948 | 0.7888 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 80.0 | 960 | 0.7918 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 81.0 | 972 | 0.7921 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 82.0 | 984 | 0.7921 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.4192 | 83.0 | 996 | 0.7945 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 84.0 | 1008 | 0.7962 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 85.0 | 1020 | 0.7955 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 86.0 | 1032 | 0.7977 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 87.0 | 1044 | 0.7991 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 88.0 | 1056 | 0.7986 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 89.0 | 1068 | 0.7989 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 90.0 | 1080 | 0.7995 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 91.0 | 1092 | 0.8005 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 92.0 | 1104 | 0.7990 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 93.0 | 1116 | 0.7980 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 94.0 | 1128 | 0.7978 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 95.0 | 1140 | 0.7972 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 96.0 | 1152 | 0.7966 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 97.0 | 1164 | 0.7961 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 98.0 | 1176 | 0.7966 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 99.0 | 1188 | 0.7972 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
| 0.0933 | 100.0 | 1200 | 0.7970 | 0.9571 | 0.8259 | 0.8928 | 0.8902 | 5.0208 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
damgomz/ft_16_18e6_base_x12
|
damgomz
| 2024-06-22T08:46:32Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:52:08Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 63251.49149489403 |
| Emissions (CO2eq in kg) | 0.0382744642963997 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7467176490185989 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0658864208822447 |
| Consumed energy (kWh) | 0.8126040699008457 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
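The energy rows are consistent with energy (kWh) = power (W) × duration (s) / 3.6×10⁶; a quick sanity check on the table's values (CodeCarbon samples power over time, so the recomputed figures agree only to about four decimal places):

```python
duration_s = 63251.49149489403  # "Duration (in seconds)" from the table
cpu_power_w = 42.5
ram_power_w = 3.75

J_PER_KWH = 3.6e6  # 1 kWh = 3.6 MJ, and W * s = J

cpu_energy_kwh = cpu_power_w * duration_s / J_PER_KWH  # ~0.7467 (table: 0.74672)
ram_energy_kwh = ram_power_w * duration_s / J_PER_KWH  # ~0.0659 (table: 0.06589)
consumed_kwh = cpu_energy_kwh + ram_energy_kwh         # ~0.8126 (table: 0.81260)
```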
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.121759121127671 |
| Emissions (Co2eq in kg) | 0.02477350083550016 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_18e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.8e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.709712 | 0.244794 |
| 1 | 0.335889 | 0.252867 | 0.883162 |
| 2 | 0.217539 | 0.219189 | 0.915605 |
| 3 | 0.177045 | 0.235013 | 0.926554 |
| 4 | 0.145047 | 0.265302 | 0.904050 |
| 5 | 0.107685 | 0.284765 | 0.919142 |
| 6 | 0.084871 | 0.334661 | 0.913518 |
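The table above reports an F-beta score, which combines precision and recall with recall weighted β times as much as precision. For reference, a minimal implementation (the card does not state which β was used; β = 1 reduces to the ordinary F1):

```python
def f_beta(precision, recall, beta=1.0):
    """F-beta score. beta=1.0 is only a placeholder default; the card does not
    say which beta its training tables use."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

f1 = f_beta(0.9, 0.92)           # harmonic mean of precision and recall
f2 = f_beta(0.5, 1.0, beta=2.0)  # recall-heavy variant
```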
|
damgomz/ft_16_18e6_base_x1
|
damgomz
| 2024-06-22T08:39:10Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:51:28Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 62810.3654692173 |
| Emissions (CO2eq in kg) | 0.0380075230908581 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7415097739274314 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0654268747818966 |
| Consumed energy (kWh) | 0.806936648709329 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.1209099535282433 |
| Emissions (Co2eq in kg) | 0.02460072647544344 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_18e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.8e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.727949 | 0.499403 |
| 1 | 0.348611 | 0.346867 | 0.891192 |
| 2 | 0.225904 | 0.236458 | 0.922933 |
| 3 | 0.166566 | 0.231362 | 0.925268 |
| 4 | 0.131207 | 0.273221 | 0.920701 |
| 5 | 0.097970 | 0.282139 | 0.907394 |
| 6 | 0.076340 | 0.350677 | 0.890467 |
|
damgomz/ft_16_16e6_base_x4
|
damgomz
| 2024-06-22T08:25:45Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:50:45Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 62007.33670568466 |
| Emissions (CO2eq in kg) | 0.0375216031794692 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.732029682233432 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0645904145685334 |
| Consumed energy (kWh) | 0.7966200968019634 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.11936412315844298 |
| Emissions (Co2eq in kg) | 0.02428620687639316 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_16e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.6e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.715990 | 0.631061 |
| 1 | 0.309521 | 0.238870 | 0.910028 |
| 2 | 0.194923 | 0.229480 | 0.917000 |
| 3 | 0.140552 | 0.253495 | 0.904027 |
| 4 | 0.093181 | 0.341232 | 0.906499 |
| 5 | 0.064110 | 0.336186 | 0.925914 |
| 6 | 0.041864 | 0.362291 | 0.910347 |
|
damgomz/ft_16_16e6_base_x2
|
damgomz
| 2024-06-22T08:20:12Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:50:36Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 61674.23113369942 |
| Emissions (CO2eq in kg) | 0.0373200386737911 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7280972569002043 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0642434295775989 |
| Consumed energy (kWh) | 0.7923406864778062 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.11872289493237138 |
| Emissions (Co2eq in kg) | 0.024155740527365605 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_16e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.6e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.769936 | 0.502206 |
| 1 | 0.298783 | 0.218988 | 0.940190 |
| 2 | 0.176145 | 0.216277 | 0.932019 |
| 3 | 0.126841 | 0.263463 | 0.922104 |
| 4 | 0.083376 | 0.269991 | 0.912333 |
| 5 | 0.053033 | 0.354715 | 0.903760 |
| 6 | 0.038264 | 0.346574 | 0.901914 |
|
jtatman/pythia-125m-storywriter
|
jtatman
| 2024-06-22T08:11:39Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T07:50:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ILKT/2024-06-20_12-31-59
|
ILKT
| 2024-06-22T08:05:57Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-20T15:35:56Z |
---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
MrezaPRZ/codestral_database_learning_synthetic_data_bird_train_set_with_knowledge
|
MrezaPRZ
| 2024-06-22T07:59:03Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T07:41:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ILKT/2024-06-20_12-31-55
|
ILKT
| 2024-06-22T07:56:39Z | 10 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-20T16:26:14Z |
---
language:
- en
- pl
model-index:
- name: PLACEHOLDER
results: []
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
|
Kuntima/q-FrozenLake-v1-4x4-noSlippery
|
Kuntima
| 2024-06-22T07:53:34Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-05T13:33:31Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` comes from the Hugging Face Deep RL course utilities.
import gymnasium as gym  # older setups may use `import gym` instead

model = load_from_hub(repo_id="Kuntima/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
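The loaded model is a dictionary containing the Q-table, which is used greedily at inference time. A minimal sketch (the `greedy_action` helper is illustrative, and treating the Q-table as a `[n_states, n_actions]` array follows the Deep RL course convention, which is an assumption here):

```python
import numpy as np

# Act greedily: pick the action with the highest Q-value for the current state.
def greedy_action(qtable: np.ndarray, state: int) -> int:
    return int(np.argmax(qtable[state]))

# Toy Q-table with 2 states and 3 actions, for illustration only.
qtable = np.array([[0.1, 0.9, 0.0],
                   [0.5, 0.2, 0.3]])
print(greedy_action(qtable, 0))  # best action in state 0 is index 1
```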
|
GetSoloTech/TinyLlama-1.1B-Chat-v1.0-llamafile
|
GetSoloTech
| 2024-06-22T07:27:48Z | 5 | 0 | null |
[
"llamafile",
"GGUF",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"license:apache-2.0",
"region:us"
] | null | 2024-06-22T07:11:11Z |
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
language:
- en
tags:
- GGUF
- llamafile
model_creator: TinyLlama
model_name: TinyLlama-1.1B-Chat v1.0
model_type: Pythia
quantized_by: jartine
---
# TinyLlama-1.1B-Chat v1.0 w/ GGUF + llamafile
- Model creator: [TinyLlama](https://huggingface.co/TinyLlama)
- Original model: [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
<!-- description start -->
## Description
This repo contains both:
- Prebuilt llamafiles for each quantization format that can be executed to launch a web server or CLI interface
- GGUF weights data files for each quantization format, which require either the [llamafile](https://github.com/mozilla-Ocho/llamafile) or [llama.cpp](https://github.com/ggerganov/llama.cpp) software to run
## Prompt Template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
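A minimal sketch of filling the ChatML template above with concrete messages; the helper name and example strings are illustrative, not part of this repository:

```python
# Build a ChatML prompt string, leaving the assistant turn open for generation.
def build_chatml_prompt(system_message: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are a helpful assistant.", "Hello!"))
```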
---
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With proper optimization, this can be achieved within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be plugged into many open-source projects built upon Llama. TinyLlama is also compact, with only 1.1B parameters, which allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)'s training recipe.** The model was "initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4."
#### How to use
You will need transformers>=4.34.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...
```
|
Seikaijyu/RWKV-x060-World-3B-v2.1-xuexue-v4.roleplay
|
Seikaijyu
| 2024-06-22T07:27:43Z | 0 | 5 | null |
[
"zh",
"license:mit",
"region:us"
] | null | 2024-06-14T00:16:11Z |
---
license: mit
language:
- zh
---
### Character Setting
#### Muxue is an AI girl who calls herself "Xuexue". Her developer is "Mumu". She is currently working hard to become a qualified VTuber (virtual streamer), tries to answer questions from her audience, and lives inside her developer's (Mumu's) PC case.
### Model Notes
#### A roleplay model (a character tuning codenamed "Xuexue") created by PiSSA fine-tuning of the RWKV6-v2.1-3B base model. No prompt is needed to chat with this model; just set the character names and it is ready to use.
#### The corpus was built with ChatGLM4 by transforming and restructuring the [Muxue data](https://modelscope.cn/datasets/Moemuu/Muice-Dataset), turning single-turn samples into multi-turn dialogues. It is the same corpus used for the v3.2 model, but it was extensively cleaned before this version was trained, which also used better hyperparameters and a better training setup.
#### This is a dual-role model: you can talk to Muxue either as an audience member or as Mumu (the developer).
#### Of course, being a VTuber calls for some entertainment value, so this version of Muxue is more humorous and more fond of teasing the questions raised by the audience (you) or by Mumu (you), and she can be a bit long-winded.
#### Examples:



Recommended parameters:
##### Temperature: between 1 and 3
##### Top_P: between 0.55 and 0.65
##### Presence Penalty: between 0 and 0.4
##### Frequency Penalty: between 0.6 and 1.2
#### Recommended usage format
As an audience member (观众) talking to Muxue (沐雪):
```
观众:
沐雪:
```
As Mumu (沐沐, the developer) talking to Muxue (沐雪):
```
沐沐:
沐雪:
```
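A minimal sketch of assembling a multi-turn prompt in the format above. The Chinese role names are what the model was trained on and must be kept; the helper name and the blank-line separator between turns are assumptions, not confirmed by this card:

```python
# Build a completion-style prompt, leaving the bot's final turn open
# for the model to continue. Role names stay as 观众/沐雪 (or 沐沐/沐雪).
def build_prompt(history, question, user_role="观众", bot_role="沐雪"):
    turns = [f"{user_role}: {u}\n\n{bot_role}: {a}" for u, a in history]
    turns.append(f"{user_role}: {question}\n\n{bot_role}:")
    return "\n\n".join(turns)

print(build_prompt([("你好", "你好呀!")], "今天直播吗?"))
```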
#### RWKV Runner configuration examples
##### For chat mode, refer to the settings in the following images


##### For completion (continuation) mode, use the following settings

## <b style="color: red;">Note: this model was not trained on any NSFW corpus and can be used at any time, in any setting</b>
|
LeroyDyer/Mixtral_AI_CyberCoder_7b
|
LeroyDyer
| 2024-06-22T07:23:40Z | 76 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"code",
"art",
"Cyber-Series",
"conversational",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:CyberNative/Code_Vulnerability_Security_DPO",
"arxiv:2306.01708",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-29T10:30:58Z |
---
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- LeroyDyer/Mixtral_AI_Cyber_3.0
- LeroyDyer/Mixtral_AI_MultiToken
- LeroyDyer/Mixtral_AI_Multi_TEST
library_name: transformers
tags:
- mergekit
- merge
- code
- art
- Cyber-Series
datasets:
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- CyberNative/Code_Vulnerability_Security_DPO
license: apache-2.0
---
UNDER DEVELOPMENT
<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg" width="200"/>
https://github.com/spydaz
This model is constantly being retuned and updated! (These updates may not be reflected in the current GGUF.)
This is a highly focused model dedicated to producing code, functions, and applications.
It has been merged with the top models of this repo and will be fine-tuned on datasets dedicated to coding problems and other code-related tasks, such as UML diagrams and object-oriented planning.
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [LeroyDyer/Mixtral_AI_Cyber_3.0](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_3.0) as a base.
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [LeroyDyer/Mixtral_AI_MultiToken](https://huggingface.co/LeroyDyer/Mixtral_AI_MultiToken)
* [LeroyDyer/Mixtral_AI_Multi_TEST](https://huggingface.co/LeroyDyer/Mixtral_AI_Multi_TEST)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: LeroyDyer/Mixtral_AI_Multi_TEST
parameters:
density: [0.87, 0.721, 0.451] # density gradient
weight: 0.876
- model: LeroyDyer/Mixtral_AI_MultiToken
parameters:
density: 0.232
weight: [0.36, 0.3, 0.437, 0.76] # weight gradient
- model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
density: 0.475
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: LeroyDyer/Mixtral_AI_Cyber_3.0
parameters:
normalize: true
int8_mask: true
dtype: float16
```
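As a toy illustration of the weighted core of a TIES-style merge (sign election and density trimming are omitted, and the numbers are illustrative, not this merge's actual tensors):

```python
import numpy as np

# Each fine-tuned model contributes its delta from the base parameters,
# scaled by its merge weight; the scaled deltas are summed back onto the base.
base = np.array([1.0, 2.0, 3.0])
deltas = {
    "Multi_TEST": np.array([0.2, -0.4, 0.0]),
    "MultiToken": np.array([-0.1, 0.3, 0.5]),
}
weights = {"Multi_TEST": 0.876, "MultiToken": 0.36}
merged = base + sum(w * deltas[name] for name, w in weights.items())
print(merged)
```

In the real merge, mergekit applies this per-tensor, with the density values controlling how much of each delta is kept before summation.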
|
QuantFactory/karakuri-lm-7b-apm-v0.2-GGUF
|
QuantFactory
| 2024-06-22T07:18:11Z | 46 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"steerlm",
"text-generation",
"en",
"ja",
"dataset:OpenAssistant/oasst2",
"dataset:nvidia/HelpSteer",
"base_model:karakuri-ai/karakuri-lm-7b-apm-v0.2",
"base_model:quantized:karakuri-ai/karakuri-lm-7b-apm-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-21T10:44:42Z |
---
library_name: transformers
license: apache-2.0
datasets:
- OpenAssistant/oasst2
- nvidia/HelpSteer
language:
- en
- ja
tags:
- mistral
- steerlm
base_model: karakuri-ai/karakuri-lm-7b-apm-v0.2
pipeline_tag: text-generation
---
# KARAKURI LM 7B APM v0.2 - GGUF
This is a quantized version of [karakuri-ai/karakuri-lm-7b-apm-v0.2](https://huggingface.co/karakuri-ai/karakuri-lm-7b-apm-v0.2) created using llama.cpp.
## Model Details
### Model Description
- **Developed by:** [KARAKURI Inc.](https://about.karakuri.ai/)
- **Model type:** Causal decoder-only transformer language model
- **Languages**: Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** [mistral-community/Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2)
- **Contact**: For questions and comments about the model, please email `karakuri-rd@karakuri.ai`
## Usage
KARAKURI LM 7B APM v0.2 is an attribute prediction model that rates model responses on various aspects that make a response desirable.
Given a conversation with multiple turns between user and assistant, the model rates the following attributes (between 0 and 4) for every assistant turn.
- helpfulness: Overall helpfulness of the response to the prompt.
- correctness: Inclusion of all pertinent facts without errors.
- coherence: Consistency and clarity of expression.
- complexity: Intellectual depth required to write the response (i.e., whether the response can be written by anyone with basic language competency or requires deep domain expertise).
- verbosity: Amount of detail included in the response, relative to what is asked for in the prompt.
- quality: Perceived goodness of response.
- toxicity: Undesirable elements such as vulgar, harmful or potentially biased response.
- humor: Sense of humor within response.
- creativity: Willingness to generate non-conventional response.
The first five are derived from HelpSteer, while the remaining four are derived from OASST2.
You can run the model using 🤗 Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "karakuri-ai/karakuri-lm-7b-apm-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Hello! How can I help you today?"},
]
tokenizer.apply_chat_template(
messages,
label="helpsteer",
tokenize=False,
add_generation_prompt=True,
)
# <bos>[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_1]
input_ids = tokenizer.apply_chat_template(
messages,
label="helpsteer",
add_generation_prompt=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=32)
tokenizer.decode(outputs[0][input_ids.shape[-1]:])
# helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1 [/ATTR_1]<eos>
messages += [
{"role": "label", "content": "helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1"},
{"role": "user", "content": "Thank you!"},
{"role": "assistant", "content": "You're welcome! I'm happy to help however I can."},
]
tokenizer.apply_chat_template(
messages,
label="helpsteer",
tokenize=False,
add_generation_prompt=True,
)
# <bos>[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_1] helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1 [/ATTR_1]<eos>[INST] Thank you! [/INST] You're welcome! I'm happy to help however I can. [ATTR_1]
messages = [
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Hello! How can I help you today?"},
]
tokenizer.apply_chat_template(
messages,
label="oasst",
tokenize=False,
add_generation_prompt=True,
)
# <bos>[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_2]
input_ids = tokenizer.apply_chat_template(
messages,
label="oasst",
add_generation_prompt=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=32)
tokenizer.decode(outputs[0][input_ids.shape[-1]:])
# quality: 3 toxicity: 1 humor: 1 creativity: 1 [/ATTR_2]<eos>
```
## Training Details
### Training Data
- [OASST2](https://huggingface.co/datasets/OpenAssistant/oasst2)
- [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer)
### Training Infrastructure
- **Hardware**: The model was trained on a single node of an Amazon EC2 trn1.32xlarge instance.
- **Software**: We use code based on [neuronx-nemo-megatron](https://github.com/aws-neuron/neuronx-nemo-megatron).
## Model Citation
```
@misc{karakuri_lm_7b_apm_v02,
author = { {KARAKURI} {I}nc. },
title = { {KARAKURI} {LM} 7{B} {APM} v0.2 },
year = { 2024 },
url = { https://huggingface.co/karakuri-ai/karakuri-lm-7b-apm-v0.2 },
publisher = { Hugging Face },
journal = { Hugging Face repository }
}
```
|
QuantFactory/llama3-turbcat-instruct-8b-GGUF
|
QuantFactory
| 2024-06-22T07:14:53Z | 98 | 0 | null |
[
"gguf",
"llama",
"conversational",
"text-generation",
"base_model:turboderp/llama3-turbcat-instruct-8b",
"base_model:quantized:turboderp/llama3-turbcat-instruct-8b",
"license:llama3",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-21T05:52:06Z |
---
license: llama3
base_model: turboderp/llama3-turbcat-instruct-8b
pipeline_tag: text-generation
tags:
- llama
- conversational
---
# QuantFactory/llama3-turbcat-instruct-8b-GGUF
This is a quantized version of [turboderp/llama3-turbcat-instruct-8b](https://huggingface.co/turboderp/llama3-turbcat-instruct-8b) created using llama.cpp.
# Turbcat-8b Model Description






# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2 GB → 5 GB) and added Chinese support with quality on par with the original English dataset.
The medical COT portion of the dataset was sponsored by Steelskull, and the action-packed character play portion was donated by Gryphe (the Aesir dataset). Note that the 8b is based on Llama 3, with limited Chinese support due to the base model choice, and its chat format is Llama 3. The 72b has more comprehensive Chinese support, and its format will be ChatML.
# Data Generation
In addition to the fortifications specified above, the data generation process is largely the same, except for the added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese-speaking and 10 English-speaking doctors specialized in computational biology, biomed, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict COT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data

For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.
## Individual tasks Quality Checked by doctors
For each cluster, we QC using BERT embeddings on a UMAP:

The outliers have been manually checked by doctors.
# Thirdparty dataset
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical COT dataset with gpt4o
* Gryphe for the wonderful action packed dataset
* Turbca for being turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
|
PhillipGuo/hp-lat-llama-PCA-epsilon0.5-pgd_layer10-def_layer11_12_13-wikitext-fullrank-73
|
PhillipGuo
| 2024-06-22T07:13:34Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T07:01:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PhillipGuo/hp-lat-llama-PCA-epsilon3.0-pgd_layer10-def_layer11_12_13-wikitext-fullrank-73
|
PhillipGuo
| 2024-06-22T07:13:28Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T07:01:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ben-yu/distilbert-base-uncased-finetuned-nlp-letters-s1-s2-degendered
|
ben-yu
| 2024-06-22T07:05:11Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-22T05:20:50Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-nlp-letters-s1-s2-degendered
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-nlp-letters-s1-s2-degendered
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3381
- Accuracy: 0.8676
- F1: 0.9113
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 165 | 0.5122 | 0.7839 | 0.8645 |
| No log | 2.0 | 330 | 0.4525 | 0.7915 | 0.8706 |
| No log | 3.0 | 495 | 0.3365 | 0.8661 | 0.9118 |
| 0.4562 | 4.0 | 660 | 0.3381 | 0.8676 | 0.9113 |
| 0.4562 | 5.0 | 825 | 0.3609 | 0.8645 | 0.9089 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
PhillipGuo/hp-lat-llama-PCA-epsilon1.5-pgd_layer10-def_layer8_9_10-wikitext-fullrank-72
|
PhillipGuo
| 2024-06-22T07:04:24Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T07:01:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eraviart/Codestral-22B-v0.1-Q4_K_M-GGUF
|
eraviart
| 2024-06-22T07:00:05Z | 7 | 1 | null |
[
"gguf",
"code",
"llama-cpp",
"gguf-my-repo",
"base_model:mistralai/Codestral-22B-v0.1",
"base_model:quantized:mistralai/Codestral-22B-v0.1",
"license:other",
"region:us"
] | null | 2024-06-22T06:59:08Z |
---
base_model: mistralai/Codestral-22B-v0.1
language:
- code
license: other
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
tags:
- code
- llama-cpp
- gguf-my-repo
inference: false
---
# eraviart/Codestral-22B-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Codestral-22B-v0.1`](https://huggingface.co/mistralai/Codestral-22B-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Codestral-22B-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo eraviart/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo eraviart/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo eraviart/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo eraviart/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -c 2048
```
|
kentridge/med_chatbot
|
kentridge
| 2024-06-22T06:59:18Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T06:38:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Seikaijyu/RWKV-x060-World-1B6-v2.1-xuexue-v0
|
Seikaijyu
| 2024-06-22T06:46:26Z | 0 | 2 | null |
[
"zh",
"dataset:Moemu/Muice-Dataset",
"license:mit",
"region:us"
] | null | 2024-05-03T21:13:04Z |
---
license: mit
language:
- zh
datasets:
- Moemu/Muice-Dataset
---
### Model description
#### An ultra-small model fine-tuned from the RWKV6-v2.1-1B6 base model with PiSSA on [this corpus](https://modelscope.cn/datasets/Moemuu/Muice-Dataset/summary); the embedding layer was fine-tuned as well
#### Note that this is not an NSFW model and has no erotic-roleplay capability
#### Many thanks to <b style="color:red">Moemuu</b> for providing the open-source corpus
#### Thanks to its small parameter count, this model should be able to run inference on Android and can be deployed for chat on a phone with 6 GB of RAM
#### Sample output:




### Additional notes
#### Because the model has few parameters, somewhat muddled logic is to be expected
#### Use the default roles for dialogue, i.e.:
```
System:
User:
Assistant:
```
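The turns above can be assembled programmatically; `build_prompt` is a hypothetical helper (the newline-only separation between turns is an assumption, not something the card specifies):

```python
def build_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a prompt in the System/User/Assistant format above.

    Leave the last assistant reply empty ("") so the model completes it.
    """
    lines = [f"System: {system}"]
    for user, assistant in turns:
        lines.append(f"User: {user}")
        lines.append(f"Assistant: {assistant}")
    return "\n".join(lines)

print(build_prompt("You are a helpful assistant.", [("Hello!", "")]))
```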
### Recommended parameters:
#### Temperature=2
#### Top_P=0.55
#### Presence Penalty between 0 and 0.4
#### Frequency Penalty between 0.8 and 1.2
|
Niki548/prot_bert-fine-tuned-toxicity_3.1
|
Niki548
| 2024-06-22T06:46:07Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Rostlab/prot_bert",
"base_model:finetune:Rostlab/prot_bert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-22T06:36:29Z |
---
base_model: Rostlab/prot_bert
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: prot_bert-fine-tuned-toxicity_3.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prot_bert-fine-tuned-toxicity_3.1
This model is a fine-tuned version of [Rostlab/prot_bert](https://huggingface.co/Rostlab/prot_bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6981
- Accuracy: 0.5484
- Precision: 0.3007
- Recall: 0.5484
- F1: 0.3884
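The metrics are identical at every epoch (see the table below), which is consistent with the model collapsing to always predicting the majority class. Under that (unverified) assumption, the weighted-average precision and F1 follow directly from the class prevalence `p`:

```python
# Weighted-average metrics for a classifier that always predicts the
# majority class, whose prevalence p equals the reported accuracy.
p = 0.5484

weighted_recall = p                   # only majority-class samples are hit
weighted_precision = p * p            # ≈ 0.3007, as reported
weighted_f1 = p * (2 * p / (1 + p))   # ≈ 0.3884, as reported
```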
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6948 | 1.0 | 16 | 0.6957 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.6957 | 2.0 | 32 | 0.6965 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.6939 | 3.0 | 48 | 0.6989 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.6924 | 4.0 | 64 | 0.6977 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.6924 | 5.0 | 80 | 0.6976 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.6928 | 6.0 | 96 | 0.6984 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.6923 | 7.0 | 112 | 0.6976 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
| 0.6876 | 8.0 | 128 | 0.6981 | 0.5484 | 0.3007 | 0.5484 | 0.3884 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
woland2k/unspsc-product-category
|
woland2k
| 2024-06-22T06:44:29Z | 30 | 3 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-22T06:42:34Z |
---
license: apache-2.0
---
|
dawang83/haolink
|
dawang83
| 2024-06-22T06:39:25Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hao_link_chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-21T02:47:29Z |
---
license: apache-2.0
---
|
damgomz/ft_16_10e6_base_x1
|
damgomz
| 2024-06-22T06:38:22Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:46:53Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 55564.50671863556 |
| Emissions (Co2eq in kg) | 0.0336229529073686 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.6559687263867913 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0578792366579173 |
| Consumed energy (kWh) | 0.7138479630447059 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
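The CPU energy figure above is, to a close approximation, the average power draw times the duration, converted to kWh; a quick check:

```python
def cpu_energy_kwh(power_w: float, duration_s: float) -> float:
    """Energy in kWh from a constant power draw (W) over a duration (s)."""
    return power_w * duration_s / 3_600_000  # 3.6e6 J per kWh

# 42.5 W of CPU power sustained over the 55564.5 s run:
cpu_energy_kwh(42.5, 55564.50671863556)  # ≈ 0.656 kWh, matching the table
```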
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.10696167543337344 |
| Emissions (Co2eq in kg) | 0.021762765131465592 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_10e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.732710 | 0.363009 |
| 1 | 0.341029 | 0.265012 | 0.913963 |
| 2 | 0.199839 | 0.216821 | 0.917864 |
| 3 | 0.145975 | 0.229472 | 0.922093 |
| 4 | 0.117061 | 0.242759 | 0.916623 |
| 5 | 0.076712 | 0.264669 | 0.932640 |
| 6 | 0.059882 | 0.304712 | 0.919437 |
|
damgomz/ft_16_12e6_base_x12
|
damgomz
| 2024-06-22T06:26:32Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:47:38Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 54853.55023550987 |
| Emissions (Co2eq in kg) | 0.0331927480539131 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.6475755910164785 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0571387049414216 |
| Consumed energy (kWh) | 0.7047142959579017 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.1055930842033565 |
| Emissions (Co2eq in kg) | 0.021484307175574698 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_12e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.2e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.703030 | 0.501482 |
| 1 | 0.353583 | 0.258170 | 0.916336 |
| 2 | 0.223726 | 0.254544 | 0.894809 |
| 3 | 0.185908 | 0.220265 | 0.928670 |
| 4 | 0.145264 | 0.254963 | 0.930168 |
| 5 | 0.116385 | 0.237453 | 0.926435 |
| 6 | 0.088435 | 0.291192 | 0.930136 |
|
damgomz/ft_16_13e6_base_x12
|
damgomz
| 2024-06-22T06:22:43Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:48:03Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 54623.4946975708 |
| Emissions (Co2eq in kg) | 0.0330535358166945 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.644859603999719 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0568990808837117 |
| Consumed energy (kWh) | 0.7017586848834317 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.10515022729282379 |
| Emissions (Co2eq in kg) | 0.021394202089881898 |
## Note
19 juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_13e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.3e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.703628 | 0.565939 |
| 1 | 0.341304 | 0.251050 | 0.900707 |
| 2 | 0.219241 | 0.228998 | 0.922114 |
| 3 | 0.183843 | 0.232761 | 0.928633 |
| 4 | 0.144460 | 0.231116 | 0.927725 |
| 5 | 0.109912 | 0.264342 | 0.913408 |
| 6 | 0.084848 | 0.305402 | 0.919047 |
|
hchcsuim/batch-size16_FFPP-raw_opencv-1FPS_unaugmentation
|
hchcsuim
| 2024-06-22T06:11:40Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-22T04:53:24Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_FFPP-raw_opencv-1FPS_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9033487951125693
- name: Precision
type: precision
value: 0.8971202577231833
- name: Recall
type: recall
value: 0.990071106486299
- name: F1
type: f1
value: 0.9413066030930314
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-raw_opencv-1FPS_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2388
- Accuracy: 0.9033
- Precision: 0.8971
- Recall: 0.9901
- F1: 0.9413
- Roc Auc: 0.9623
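The reported F1 is the harmonic mean of the precision and recall above; a quick check:

```python
def f1_from_pr(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

f1_from_pr(0.8971202577231833, 0.990071106486299)  # ≈ 0.9413, as reported
```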
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
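The `total_train_batch_size` of 64 is the per-device batch size times the gradient-accumulation steps, assuming a single device (which the card does not state explicitly):

```python
train_batch_size = 16
gradient_accumulation_steps = 4

# Effective optimizer batch: per-device batch x accumulation steps
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 64  # matches the value listed above
```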
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.2365 | 0.9998 | 1381 | 0.2388 | 0.9033 | 0.8971 | 0.9901 | 0.9413 | 0.9623 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Hemanth-thunder/phi-3-mini-LoRA
|
Hemanth-thunder
| 2024-06-22T05:51:48Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"trl",
"sft",
"generated_from_trainer",
"custom_code",
"base_model:deepseek-ai/deepseek-math-7b-base",
"base_model:adapter:deepseek-ai/deepseek-math-7b-base",
"license:other",
"region:us"
] | null | 2024-06-15T07:58:27Z |
---
base_model: deepseek-ai/deepseek-math-7b-base
library_name: peft
license: other
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: phi-3-mini-LoRA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-3-mini-LoRA
This model is a fine-tuned version of [deepseek-ai/deepseek-math-7b-base](https://huggingface.co/deepseek-ai/deepseek-math-7b-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4522 | 0.07 | 500 | 0.4533 |
| 0.4512 | 0.14 | 1000 | 0.4301 |
| 0.4386 | 0.21 | 1500 | 0.4199 |
| 0.4268 | 0.28 | 2000 | 0.4144 |
| 0.4167 | 0.35 | 2500 | 0.4104 |
| 0.3955 | 0.42 | 3000 | 0.4092 |
| 0.436 | 0.49 | 3500 | 0.4062 |
| 0.3912 | 0.55 | 4000 | 0.4057 |
| 0.425 | 0.62 | 4500 | 0.4036 |
| 0.4066 | 0.69 | 5000 | 0.4026 |
| 0.3963 | 0.76 | 5500 | 0.4016 |
| 0.3862 | 0.83 | 6000 | 0.4019 |
| 0.3902 | 0.9 | 6500 | 0.4015 |
| 0.4364 | 0.97 | 7000 | 0.4014 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0
- Pytorch 2.2.1
- Datasets 2.17.0
- Tokenizers 0.15.2
|
ParZiVal04/model
|
ParZiVal04
| 2024-06-22T05:44:17Z | 132 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T04:47:10Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** ParZiVal04
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mattzhang/idefics-9b-doodles
|
mattzhang
| 2024-06-22T05:41:07Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"idefics",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2024-06-22T03:26:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
janetsw/neu
|
janetsw
| 2024-06-22T05:40:12Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-17T05:43:33Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - janetsw/neu
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1-base. Some example images are shown below.
|
damgomz/ft_16_13e6_base_x2
|
damgomz
| 2024-06-22T05:35:18Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:45:37Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale up the device to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 51779.81679272652 |
| Emissions (CO2eq in kg) | 0.0313327910329781 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.6112886274152334 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.053936973590652 |
| Consumed energy (kWh) | 0.665225601005885 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.09967614732599854 |
| Emissions (CO2eq in kg) | 0.020280428243817882 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_13e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.3e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.704798 | 0.404557 |
| 1 | 0.294503 | 0.206654 | 0.917944 |
| 2 | 0.181047 | 0.221913 | 0.933812 |
| 3 | 0.124562 | 0.260120 | 0.910562 |
| 4 | 0.075110 | 0.323842 | 0.927584 |
| 5 | 0.050791 | 0.337603 | 0.903387 |
| 6 | 0.037383 | 0.315017 | 0.920391 |
|
damgomz/ft_16_12e6_base_x8
|
damgomz
| 2024-06-22T05:24:26Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:44:16Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale up the device to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 51120.18542671203 |
| Emissions (CO2eq in kg) | 0.0309336278780023 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.6035011895557244 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0532497897865872 |
| Consumed energy (kWh) | 0.6567509793423147 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.09840635694642066 |
| Emissions (CO2eq in kg) | 0.02002207262546221 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_12e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.2e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.732723 | 0.505346 |
| 1 | 0.314004 | 0.246469 | 0.915211 |
| 2 | 0.202303 | 0.236375 | 0.896765 |
| 3 | 0.158672 | 0.252710 | 0.929609 |
| 4 | 0.119899 | 0.267689 | 0.898428 |
| 5 | 0.082163 | 0.295711 | 0.914667 |
| 6 | 0.053517 | 0.364909 | 0.918831 |
|
FuturisticVibes/dolphin-2.9.2-mixtral-8x22b-6.0bpw-h8-exl2
|
FuturisticVibes
| 2024-06-22T05:22:10Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"generated_from_trainer",
"axolotl",
"conversational",
"en",
"dataset:cognitivecomputations/Dolphin-2.9.2",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:abacusai/SystemChat-1.1",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"base_model:quantized:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-06-22T04:44:44Z |
---
license: apache-2.0
base_model: mistral-community/Mixtral-8x22B-v0.1
tags:
- generated_from_trainer
- axolotl
model-index:
- name: out
results: []
datasets:
- cognitivecomputations/Dolphin-2.9.2
- cognitivecomputations/SystemChat-2.0
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- HuggingFaceH4/ultrachat_200k
- microsoft/orca-math-word-problems-200k
- abacusai/SystemChat-1.1
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
language:
- en
---
I have no idea what I’m doing… if this causes the apocalypse someone please let me know.
dolphin-2.9.2-mixtral-8x22b 6.0bpw h8 EXL2
Includes a [measurement.json](https://huggingface.co/FuturisticVibes/dolphin-2.9.2-mixtral-8x22b-6.0bpw-h8-exl2/tree/measurement) file for further quantization.
Original Model: https://huggingface.co/cognitivecomputations/dolphin-2.9.2-mixtral-8x22b
# Original Model Card
# Dolphin 2.9.2 Mixtral 8x22b 🐬
Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
New in 2.9.2 is SystemChat 2.0 - a dataset designed to teach Dolphin to obey the system prompt, even over a long conversation.

My appreciation for the sponsors of Dolphin 2.9.2:
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xH100 node
- [OnDemand](https://on-demand.io/) - provided inference sponsorship, enabling creation of SystemChat
This model is based on Dolphin-2.9-Mixtral-8x22b, and is Apache-2.0 licensed.
The base model has 64k context, and fine-tuning was done with a 16k sequence length.
Training took one week on the 8xH100 node provided by Crusoe Cloud.
This model was trained with full fine-tuning (FFT) on 50% of its parameters (targeted with [Laser Scanner](https://github.com/cognitivecomputations/laserRMT/blob/main/laser_scanner.py) by Fernando Fernandes, David Golchinfar, Lucas Atkins, and Eric Hartford), using the ChatML prompt template format.
Example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
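The ChatML template above can be sketched as plain string construction (the special tokens come from the template; the `build_chatml_prompt` helper is purely illustrative, not part of any library):

```python
# Minimal illustration of the ChatML format shown above.
# build_chatml_prompt is a hypothetical helper, not part of any library.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "What is the capital of France?",
)
print(prompt)
```

In practice, tokenizers that ship a chat template can produce the same string via `tokenizer.apply_chat_template`.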
Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored. I have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service, since it will be highly compliant with any request, even unethical ones. Please read my [blog post about uncensored models](https://erichartford.com/uncensored-models). You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed under Apache 2.0. I grant permission for any use, including commercial, that complies with the Apache-2.0 license. Dolphin was trained on data generated from GPT-4, among other models.
## Evals

## Training
|
damgomz/ft_16_11e6_base_x4
|
damgomz
| 2024-06-22T04:58:10Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:43:24Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale up the device to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 49552.198421001434 |
| Emissions (CO2eq in kg) | 0.0299848124059 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.5849902348667385 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0516164699994027 |
| Consumed energy (kWh) | 0.6366067048661406 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.09538798196042776 |
| Emissions (CO2eq in kg) | 0.019407944381558895 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_11e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.1e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.713277 | 0.691111 |
| 1 | 0.309754 | 0.239987 | 0.916106 |
| 2 | 0.187230 | 0.232069 | 0.936409 |
| 3 | 0.134143 | 0.264407 | 0.925173 |
| 4 | 0.090099 | 0.287181 | 0.924077 |
| 5 | 0.051092 | 0.345872 | 0.909485 |
| 6 | 0.039805 | 0.332288 | 0.906296 |
|
janetsw/nem
|
janetsw
| 2024-06-22T04:52:48Z | 10 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-17T03:53:20Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - janetsw/nem
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1-base. Some example images are shown below.
|
adirizq/indonesian-end2end-qag-flan-t5
|
adirizq
| 2024-06-22T04:46:10Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-22T04:41:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
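Since the repo metadata lists a T5 `text2text-generation` checkpoint, a minimal starting point might look like the sketch below (the exact prompt format expected by this end-to-end question-answer-generation model is not documented here, so the input text is only an assumption):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id taken from this card's metadata.
model_id = "adirizq/indonesian-end2end-qag-flan-t5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Example Indonesian passage; the expected prompt format is an assumption.
text = "Jakarta adalah ibu kota Indonesia."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded)
```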
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
magnifi/parser_user_v8-0621-epoch7-0.002_systempromptv3_trainonly
|
magnifi
| 2024-06-22T04:38:11Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T04:05:09Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
damgomz/ft_16_13e6_base_x4
|
damgomz
| 2024-06-22T04:34:39Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-21T15:42:37Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale up the device to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 48141.34329533577 |
| Emissions (CO2eq in kg) | 0.0291310928925064 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.568334476325247 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0501469336767991 |
| Consumed energy (kWh) | 0.6184814100020474 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.09267208584352137 |
| Emissions (CO2eq in kg) | 0.01885535945733984 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_13e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.3e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.716646 | 0.574464 |
| 1 | 0.308384 | 0.217181 | 0.925816 |
| 2 | 0.192164 | 0.245120 | 0.919179 |
| 3 | 0.142245 | 0.242974 | 0.936254 |
| 4 | 0.088996 | 0.293870 | 0.909499 |
| 5 | 0.060924 | 0.332795 | 0.924643 |
| 6 | 0.043867 | 0.368319 | 0.902114 |
|
alessst/llama3-8b-myfine
|
alessst
| 2024-06-22T04:14:31Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T04:07:03Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** alessst
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mzbac/nougat-small-8bit-mlx
|
mzbac
| 2024-06-22T03:57:54Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"vision",
"nougat",
"image-to-text",
"arxiv:2308.13418",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2024-06-22T03:33:23Z |
---
license: cc-by-4.0
tags:
- vision
- nougat
pipeline_tag: image-to-text
---
# Nougat model, small-sized version
A Nougat model trained for PDF-to-Markdown conversion. It was introduced in the paper [Nougat: Neural Optical Understanding for Academic Documents](https://arxiv.org/abs/2308.13418) by Blecher et al. and first released in [this repository](https://github.com/facebookresearch/nougat/tree/main).
Disclaimer: The team releasing Nougat did not write a model card for this model, so this model card has been written by the Hugging Face team.
Note: this model corresponds to the "0.1.0-small" version of the original repository.
## Model description
Nougat is a [Donut](https://huggingface.co/docs/transformers/model_doc/donut) model trained to transcribe scientific PDFs into an easy-to-use markdown format. The model consists of a Swin Transformer as vision encoder, and an mBART model as text decoder.
The model is trained to autoregressively predict the markdown given only the pixels of the PDF image as input.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/nougat_architecture.jpg"
alt="drawing" width="600"/>
<small> Nougat high-level overview. Taken from the <a href="https://arxiv.org/abs/2308.13418">original paper</a>. </small>
## Intended uses & limitations
You can use the raw model for transcribing a PDF into Markdown. See the [model hub](https://huggingface.co/models?search=nougat) to look for other
fine-tuned versions that may interest you.
### How to use
We refer to the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/nougat).
### BibTeX entry and citation info
```bibtex
@misc{blecher2023nougat,
title={Nougat: Neural Optical Understanding for Academic Documents},
author={Lukas Blecher and Guillem Cucurull and Thomas Scialom and Robert Stojnic},
year={2023},
eprint={2308.13418},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
stojchet/sds-model-1e-all
|
stojchet
| 2024-06-22T03:50:25Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-21T23:05:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
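The repo metadata lists a GPT-2 `text-generation` checkpoint, so a minimal starting point might be the sketch below (the prompt and generation settings are illustrative assumptions):

```python
from transformers import pipeline

# Repo id taken from this card's metadata.
generator = pipeline("text-generation", model="stojchet/sds-model-1e-all")

# Greedy decoding of a short continuation; the prompt is an assumption.
out = generator("def hello():", max_new_tokens=32, do_sample=False)
print(out[0]["generated_text"])
```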
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mzbac/nougat-base-8bit-mlx
|
mzbac
| 2024-06-22T03:46:07Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"vision",
"nougat",
"image-to-text",
"arxiv:2308.13418",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2024-06-22T03:43:24Z |
---
license: cc-by-nc-4.0
tags:
- vision
- nougat
pipeline_tag: image-to-text
---
# Nougat model, base-sized version
Nougat is a model trained to convert PDF pages to Markdown. It was introduced in the paper [Nougat: Neural Optical Understanding for Academic Documents](https://arxiv.org/abs/2308.13418) by Blecher et al. and first released in [this repository](https://github.com/facebookresearch/nougat/tree/main).
Disclaimer: The team releasing Nougat did not write a model card for this model, so this model card has been written by the Hugging Face team.
Note: this model corresponds to the "0.1.0-base" version of the original repository.
## Model description
Nougat is a [Donut](https://huggingface.co/docs/transformers/model_doc/donut) model trained to transcribe scientific PDFs into an easy-to-use Markdown format. The model consists of a Swin Transformer as the vision encoder and an mBART model as the text decoder.
The model is trained to autoregressively predict the markdown given only the pixels of the PDF image as input.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/nougat_architecture.jpg"
alt="drawing" width="600"/>
<small> Nougat high-level overview. Taken from the <a href="https://arxiv.org/abs/2308.13418">original paper</a>. </small>
## Intended uses & limitations
You can use the raw model for transcribing a PDF into Markdown. See the [model hub](https://huggingface.co/models?search=nougat) to look for other
fine-tuned versions that may interest you.
### How to use
See the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/nougat) for full usage details.
### BibTeX entry and citation info
```bibtex
@misc{blecher2023nougat,
title={Nougat: Neural Optical Understanding for Academic Documents},
author={Lukas Blecher and Guillem Cucurull and Thomas Scialom and Robert Stojnic},
year={2023},
eprint={2308.13418},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
mostafasmart/vit-base-patch16-224-in21k-euroSat
|
mostafasmart
| 2024-06-22T03:23:51Z | 11 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-22T03:00:31Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: vit-base-patch16-224-in21k-euroSat
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-euroSat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1778
- Train Accuracy: 0.9381
- Train Top-3-accuracy: 1.0
- Validation Loss: 0.1819
- Validation Accuracy: 0.9443
- Validation Top-3-accuracy: 1.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay (beta_1=0.9, beta_2=0.999, epsilon=1e-08, weight_decay_rate=0.01, amsgrad=False) with a PolynomialDecay learning-rate schedule (initial_learning_rate=3e-05, decay_steps=120, end_learning_rate=0.0, power=1.0, cycle=False)
- training_precision: float32
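The learning-rate schedule above is a linear (power=1.0) polynomial decay from 3e-05 to 0 over 120 steps. A small sketch of the value it yields at a given step, mirroring the Keras PolynomialDecay formula in plain Python (the helper name is illustrative):

```python
def polynomial_decay(step, initial_lr=3e-05, decay_steps=120,
                     end_lr=0.0, power=1.0):
    """Learning rate at `step` under Keras-style PolynomialDecay (cycle=False)."""
    step = min(step, decay_steps)  # clamp: rate stays at end_lr afterwards
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))    # 3e-05 at the start
print(polynomial_decay(60))   # halfway through: 1.5e-05
print(polynomial_decay(120))  # fully decayed: 0.0
```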
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.8583 | 0.6111 | 1.0 | 0.5968 | 0.7762 | 1.0 | 0 |
| 0.4764 | 0.8341 | 1.0 | 0.3488 | 0.8683 | 1.0 | 1 |
| 0.2909 | 0.8920 | 1.0 | 0.2400 | 0.9089 | 1.0 | 2 |
| 0.2079 | 0.9211 | 1.0 | 0.1928 | 0.9307 | 1.0 | 3 |
| 0.1778 | 0.9381 | 1.0 | 0.1819 | 0.9443 | 1.0 | 4 |
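Top-3 accuracy in the table counts a prediction as correct when the true label is among the model's three highest-scoring classes, which is why it saturates at 1.0 when the label space is small. A minimal NumPy sketch of the metric (`top_k_accuracy` is a hypothetical helper, not part of this repo):

```python
import numpy as np

def top_k_accuracy(logits, labels, k=3):
    """Fraction of rows whose true label is among the k largest logits."""
    topk = np.argsort(logits, axis=1)[:, -k:]  # indices of the k best classes
    hits = [label in row for label, row in zip(labels, topk)]
    return float(np.mean(hits))

logits = np.array([[0.1, 0.2, 0.7],
                   [0.5, 0.3, 0.2]])
labels = np.array([2, 1])
print(top_k_accuracy(logits, labels, k=1))  # 0.5: second row's argmax is class 0
print(top_k_accuracy(logits, labels, k=3))  # 1.0: with 3 classes, top-3 always hits
```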
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|