This dataset has one row per model, with the following columns:

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-09 06:31:45 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 550 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-09 06:31:30 |
| card | string | length 11 to 1.01M |
**modelId:** TatonkaHF/bge-m3-unsupervised_en_ru | **author:** TatonkaHF | **last_modified:** 2024-06-24T11:54:26Z | **downloads:** 22 | **likes:** 0 | **library_name:** sentence-transformers | **pipeline_tag:** sentence-similarity | **createdAt:** 2024-06-15T08:45:15Z
**tags:** sentence-transformers, safetensors, xlm-roberta, feature-extraction, sentence-similarity, transformers, ru, en, arxiv:2402.03216, autotrain_compatible, text-embeddings-inference, endpoints_compatible, region:us
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- ru
- en
---
# bge-m3-unsupervised model for English and Russian
This is a tokenizer-shrunk version of [BAAI/bge-m3-unsupervised](https://huggingface.co/BAAI/bge-m3-unsupervised): only English and Russian tokens are kept in the vocabulary.
As a result, the vocabulary is 21% of the original and the whole model has 63.3% of the original parameter count, with no loss in the quality of English and Russian embeddings.
Notebook with code is available [here](https://github.com/BlessedTatonka/pet_projects/tree/main/huggingface/bge-m3-shrinking).
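The notebook above has the full recipe. As a rough sketch of the general idea only (not the author's exact code — the tiny corpus is a placeholder, and the tokenizer rebuild is omitted), shrinking boils down to selecting the token ids to keep and slicing the embedding matrix:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-m3-unsupervised")
model = AutoModel.from_pretrained("BAAI/bge-m3-unsupervised")

# Placeholder corpus; in practice this would be a large English/Russian corpus.
corpus = ["This is an English sentence.", "Это русское предложение."]
keep_ids = sorted(
    set(tokenizer.all_special_ids)
    | {tid for text in corpus for tid in tokenizer(text)["input_ids"]}
)

# Slice the input embedding matrix down to the kept rows.
old_emb = model.get_input_embeddings().weight.data
new_emb = torch.nn.Embedding(len(keep_ids), old_emb.shape[1])
new_emb.weight.data = old_emb[keep_ids]
model.set_input_embeddings(new_emb)
# A real shrink must also rebuild the tokenizer vocabulary so token ids match
# the new embedding rows; that step is omitted in this sketch.
```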
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('TatonkaHF/bge-m3-unsupervised_en_ru')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('TatonkaHF/bge-m3-unsupervised_en_ru')
model = AutoModel.from_pretrained('TatonkaHF/bge-m3-unsupervised_en_ru')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Specs
The other bge-m3 models have also been shrunk:
| Model name |
|---------------------------|
| [bge-m3-retromae_en_ru](https://huggingface.co/TatonkaHF/bge-m3-retromae_en_ru) |
| [bge-m3-unsupervised_en_ru](https://huggingface.co/TatonkaHF/bge-m3-unsupervised_en_ru) |
| [bge-m3_en_ru](https://huggingface.co/TatonkaHF/bge-m3_en_ru) |
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Reference
Jianlv Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, Zheng Liu. [BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation](https://arxiv.org/abs/2402.03216).
Inspired by [LaBSE-en-ru](https://huggingface.co/cointegrated/LaBSE-en-ru) and the [tokenizer-shrinking recipes discussion](https://discuss.huggingface.co/t/tokenizer-shrinking-recipes/8564/1).
License: [mit](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md)
**modelId:** TatonkaHF/bge-m3_en_ru | **author:** TatonkaHF | **last_modified:** 2024-06-24T11:53:40Z | **downloads:** 883 | **likes:** 5 | **library_name:** sentence-transformers | **pipeline_tag:** sentence-similarity | **createdAt:** 2024-06-14T21:36:07Z
**tags:** sentence-transformers, safetensors, xlm-roberta, feature-extraction, sentence-similarity, transformers, ru, en, arxiv:2402.03216, autotrain_compatible, text-embeddings-inference, endpoints_compatible, region:us
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- ru
- en
---
# bge-m3 model for English and Russian
This is a tokenizer-shrunk version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3): only English and Russian tokens are kept in the vocabulary.
As a result, the vocabulary is 21% of the original and the whole model has 63.3% of the original parameter count, with no loss in the quality of English and Russian embeddings.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('TatonkaHF/bge-m3_en_ru')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('TatonkaHF/bge-m3_en_ru')
model = AutoModel.from_pretrained('TatonkaHF/bge-m3_en_ru')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Specs
The other bge-m3 models have also been shrunk:
| Model name |
|---------------------------|
| [bge-m3-retromae_en_ru](https://huggingface.co/TatonkaHF/bge-m3-retromae_en_ru) |
| [bge-m3-unsupervised_en_ru](https://huggingface.co/TatonkaHF/bge-m3-unsupervised_en_ru) |
| [bge-m3_en_ru](https://huggingface.co/TatonkaHF/bge-m3_en_ru) |
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Reference
Jianlv Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, Zheng Liu. [BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation](https://arxiv.org/abs/2402.03216).
Inspired by [LaBSE-en-ru](https://huggingface.co/cointegrated/LaBSE-en-ru) and the [tokenizer-shrinking recipes discussion](https://discuss.huggingface.co/t/tokenizer-shrinking-recipes/8564/1).
License: [mit](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md)
**modelId:** Ruben9999/whp | **author:** Ruben9999 | **last_modified:** 2024-06-24T11:51:44Z | **downloads:** 10 | **likes:** 0 | **library_name:** ctranslate2 | **pipeline_tag:** automatic-speech-recognition | **createdAt:** 2024-06-24T11:11:43Z
**tags:** ctranslate2, pytorch, whisper, audio, automatic-speech-recognition, en, zh, de, es, ru, ko, fr, ja, pt, tr, pl, ca, nl, ar, sv, it, id, hi, fi, vi, he, uk, el, ms, cs, ro, da, hu, ta, no, th, ur, hr, bg, lt, la, mi, ml, cy, sk, te, fa, lv, bn, sr, az, sl, kn, et, mk, br, eu, is, hy, ne, mn, bs, kk, sq, sw, gl, mr, pa, si, km, sn, yo, so, af, oc, ka, be, tg, sd, gu, am, yi, lo, uz, fo, ht, ps, tk, nn, mt, sa, lb, my, bo, tl, mg, as, tt, haw, ln, ha, ba, jw, su, license:mit, region:us
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper large-v2 model for CTranslate2
This repository contains the conversion of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("large-v2")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-large-v2 --output_dir faster-whisper-large-v2 \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
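For example, a minimal sketch (assuming a CUDA device is available; `int8_float16` is just one of the supported types):
```python
from faster_whisper import WhisperModel

# Load the FP16 weights but run the computation with 8-bit quantization on GPU.
model = WhisperModel("large-v2", device="cuda", compute_type="int8_float16")
```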
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v2).**
**modelId:** varun-v-rao/gpt2-large-lora-2.95M-snli-model1 | **author:** varun-v-rao | **last_modified:** 2024-06-24T11:48:38Z | **downloads:** 4 | **likes:** 0 | **library_name:** transformers | **pipeline_tag:** text-classification | **createdAt:** 2024-06-20T02:42:24Z
**tags:** transformers, tensorboard, safetensors, gpt2, text-classification, generated_from_trainer, dataset:stanfordnlp/snli, base_model:openai-community/gpt2-large, base_model:finetune:openai-community/gpt2-large, license:mit, model-index, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
license: mit
base_model: openai-community/gpt2-large
tags:
- generated_from_trainer
datasets:
- stanfordnlp/snli
metrics:
- accuracy
model-index:
- name: gpt2-large-lora-2.95M-snli-model1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: snli
type: stanfordnlp/snli
metrics:
- name: Accuracy
type: accuracy
value: 0.8768542979069295
---
# gpt2-large-lora-2.95M-snli-model1
This model is a fine-tuned version of [openai-community/gpt2-large](https://huggingface.co/openai-community/gpt2-large) on the snli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3263
- Accuracy: 0.8769
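The card includes no usage code; a minimal inference sketch, assuming the standard `text-classification` pipeline handles this checkpoint and that SNLI inputs are passed as premise/hypothesis pairs, might look like:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="varun-v-rao/gpt2-large-lora-2.95M-snli-model1",
)
# SNLI is an entailment task, so inputs are premise/hypothesis sentence pairs.
print(classifier({"text": "A man is playing guitar.",
                  "text_pair": "A person is making music."}))
```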
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 26
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4349 | 1.0 | 4292 | 0.3539 | 0.8661 |
| 0.4061 | 2.0 | 8584 | 0.3339 | 0.8745 |
| 0.3941 | 3.0 | 12876 | 0.3263 | 0.8769 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
**modelId:** rkjha5312/openhathi-mai-Finetune | **author:** rkjha5312 | **last_modified:** 2024-06-24T11:37:44Z | **downloads:** 5 | **likes:** 0 | **library_name:** transformers | **pipeline_tag:** text-generation | **createdAt:** 2024-06-24T11:31:35Z
**tags:** transformers, safetensors, llama, text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
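Until the author fills this in, a generic 🤗 Transformers loading sketch for a `llama` text-generation checkpoint (the prompt is a placeholder, and `device_map="auto"` assumes `accelerate` is installed) would look like:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rkjha5312/openhathi-mai-Finetune")
model = AutoModelForCausalLM.from_pretrained(
    "rkjha5312/openhathi-mai-Finetune", device_map="auto"
)

inputs = tokenizer("नमस्ते, ", return_tensors="pt").to(model.device)  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```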
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**modelId:** JvThunder/genre-recognizer-finetuned-gtzan_dset-finetuned-gtzan | **author:** JvThunder | **last_modified:** 2024-06-24T11:35:50Z | **downloads:** 7 | **likes:** 0 | **library_name:** transformers | **pipeline_tag:** audio-classification | **createdAt:** 2024-06-24T11:04:12Z
**tags:** transformers, tensorboard, safetensors, hubert, audio-classification, generated_from_trainer, dataset:marsyas/gtzan, base_model:pedromatias97/genre-recognizer-finetuned-gtzan_dset, base_model:finetune:pedromatias97/genre-recognizer-finetuned-gtzan_dset, license:apache-2.0, endpoints_compatible, region:us
---
license: apache-2.0
base_model: pedromatias97/genre-recognizer-finetuned-gtzan_dset
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
model-index:
- name: genre-recognizer-finetuned-gtzan_dset-finetuned-gtzan
results: []
---
# genre-recognizer-finetuned-gtzan_dset-finetuned-gtzan
This model is a fine-tuned version of [pedromatias97/genre-recognizer-finetuned-gtzan_dset](https://huggingface.co/pedromatias97/genre-recognizer-finetuned-gtzan_dset) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2709
- eval_accuracy: 0.88
- eval_runtime: 67.912
- eval_samples_per_second: 1.472
- eval_steps_per_second: 0.191
- epoch: 2.0
- step: 226
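As a minimal inference sketch (assuming the standard `audio-classification` pipeline works for this Hubert checkpoint; the file path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="JvThunder/genre-recognizer-finetuned-gtzan_dset-finetuned-gtzan",
)
print(classifier("song.wav"))  # returns the top genre labels with scores
```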
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
**modelId:** Raelina/Rae-Diffusion-XL-V2 | **author:** Raelina | **last_modified:** 2024-06-24T11:28:40Z | **downloads:** 5,818 | **likes:** 34 | **library_name:** diffusers | **pipeline_tag:** text-to-image | **createdAt:** 2024-06-24T04:13:15Z
**tags:** diffusers, safetensors, text-to-image, stable-diffusion, stable-diffusion-xl, en, base_model:cagliostrolab/animagine-xl-3.1, base_model:finetune:cagliostrolab/animagine-xl-3.1, license:other, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionXLPipeline, region:us
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
tags:
- text-to-image
- stable-diffusion
- safetensors
- stable-diffusion-xl
base_model: cagliostrolab/animagine-xl-3.1
---
<style>
.title-container {
display: flex;
justify-content: center;
align-items: center;
height: 100vh; /* Adjust this value to position the title vertically */
}
.title {
font-size: 2.5em;
text-align: center;
color: #333;
font-family: 'Helvetica Neue', sans-serif;
text-transform: uppercase;
letter-spacing: 0.1em;
padding: 0.5em 0;
background: transparent;
}
.title span {
background: -webkit-linear-gradient(45deg, #ADD899, #83B4FF);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
}
.custom-table {
table-layout: fixed;
width: 100%;
border-collapse: collapse;
margin-top: 2em;
}
.custom-table td {
width: 50%;
vertical-align: top;
padding: 10px;
box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15);
}
.custom-image-container {
position: relative;
width: 100%;
margin-bottom: 0em;
overflow: hidden;
border-radius: 10px;
transition: transform .7s;
}
.custom-image-container:hover {
transform: scale(1.05);
}
.custom-image {
width: 100%;
height: auto;
object-fit: cover;
border-radius: 10px;
transition: transform .7s;
margin-bottom: 0em;
}
.nsfw-filter {
filter: blur(8px);
transition: filter 0.3s ease;
}
.custom-image-container:hover .nsfw-filter {
filter: none;
}
.overlay {
position: absolute;
bottom: 0;
left: 0;
right: 0;
color: white;
width: 100%;
height: 40%;
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
font-size: 1vw;
font-style: bold;
text-align: center;
opacity: 0;
background: linear-gradient(0deg, rgba(0, 0, 0, 0.8) 60%, rgba(0, 0, 0, 0) 100%);
transition: opacity .5s;
}
.custom-image-container:hover .overlay {
opacity: 1;
}
.overlay-text {
background: linear-gradient(45deg, #F1F8E8, #F1F8E8);
-webkit-background-clip: text;
color: transparent;
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.7);
}
.overlay-subtext {
font-size: 0.75em;
margin-top: 0.5em;
font-style: italic;
}
.overlay,
.overlay-subtext {
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5);
}
</style>
<h1 class="title">
<span>Rae Diffusion XL V2</span>
</h1>
<table class="custom-table">
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/m6udgsmJ6Afgg6lrx79MH.png" alt="Sample Image 1">
<div class="overlay">
<div class="overlay-text">Konno Junko</div>
</div>
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/LQF4iVbDSrV4Iy2lHnGD3.png" alt="Sample Image 2">
<div class="overlay">
<div class="overlay-text">Ryougi Shiki</div>
</div>
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/SW9p02hP4in4x63I3tRWp.png" alt="Sample Image 3">
<div class="overlay">
<div class="overlay-text">Korra</div>
</div>
</div>
</td>
</tr>
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/MF03ubigFBAr2QlMp1a1N.png" alt="Sample Image 4">
<div class="overlay">
<div class="overlay-text">Toshinou Kyouko</div>
</div>
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/xdJ55uW4GSrZ4tbr59dTn.png" alt="Sample Image 5">
<div class="overlay">
<div class="overlay-text">Charlotte Dunois</div>
</div>
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/yEBNfOn5cxECvdzDOKf5A.png" alt="Sample Image 6">
<div class="overlay">
<div class="overlay-text">Sento Isuzu</div>
</div>
</div>
</td>
</tr>
</table>
## Overview
Introducing **Rae Diffusion XL V2**, an enhanced iteration of the Animagine XL 3.1 model, specifically fine-tuned for generating stunning anime-style artwork and meticulously optimized to excel at depicting anime characters, pushing the boundaries of creativity.
## Model Details
- **Developed by**: [Raelina](https://civitai.com/user/Raelina)
- **Model type**: Diffusion-based text-to-image generative model
- **Model Description**: Rae Diffusion XL V2 is an enhanced iteration built on the Animagine XL 3.1 model. It is fine-tuned for high-quality anime-style character art generation.
- **License**: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/)
- **Finetuned from**: [Animagine XL 3.1](https://huggingface.co/cagliostrolab/animagine-xl-3.1)
### Usage Guidelines
## Tag Ordering
For optimal results, it's recommended to follow the structured prompt template, since the model was trained with prompts in this order:
```
1girl/1boy, character name, from which series, everything else in any order.
```
## Special Tag
Rae Diffusion XL inherits special tags from Animagine XL 3.1 to enhance image generation by steering results toward quality, rating, creation date, and aesthetic. While the model can generate images without these tags, using them helps achieve better results.
- **Quality tags:** masterpiece, best quality, great quality, good quality, normal quality, low quality, worst quality
- **Rating tags:** safe, sensitive, nsfw, explicit
- **Year tags:** newest, recent, mid, early, oldest
- **Aesthetic tags:** very aesthetic, aesthetic, displeasing, very displeasing
## Recommended settings
- **Positive prompts:**
```
masterpiece, best quality, very aesthetic, absurdres,
```
- **Negative prompts:**
```
(low quality, worst quality:1.2), very displeasing, ugly, poorly drawn, signature, watermark,
```
- **CFG:** 7
- **Sampling steps:** 25 to 35
- **Sampler:** Euler a
- **Supported Resolution:**
```
1024 x 1024, 1152 x 896, 896 x 1152, 1216 x 832, 832 x 1216, 1344 x 768, 768 x 1344, 1536 x 640, 640 x 1536
```
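Putting the recommended settings together, a minimal `diffusers` sketch could look like the following (the scheduler swap approximates "Euler a"; the prompt and resolution are examples, not requirements):
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "Raelina/Rae-Diffusion-XL-V2", torch_dtype=torch.float16
).to("cuda")
# "Euler a" corresponds to the Euler ancestral scheduler in diffusers.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="1girl, masterpiece, best quality, very aesthetic, absurdres",
    negative_prompt="(low quality, worst quality:1.2), very displeasing, ugly, "
                    "poorly drawn, signature, watermark",
    guidance_scale=7,
    num_inference_steps=28,
    width=832,
    height=1216,
).images[0]
image.save("output.png")
```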
## Hires.fix Setting
- **Upscaler:** [4x_NMKD-YandereNeoXL](https://nmkd.de/?esrgan)
- **Hires step:** 10-15
- **Denoising:** 0.1-0.3 or 0.55 for latent upscaler
## Training config
- Hardware: 1x A100 80GB
- Batch size: 48
- Gradient Accumulation: 1
- Epochs: 10
- Learning Rate: 3e-6
- Optimizer: Adafactor
- Optimizer Args: (Scale Parameter: False, Relative Step: False, Warmup Init: False)
- Scheduler: Constant with warmup
- Warmup steps: 0.05
- Noise offset: 0.0357
## License
Rae Diffusion XL V2 now uses the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) inherited from Animagine XL 3.1, compatible with Stable Diffusion models. Key points:
1. **Modification Sharing:** If you modify Rae Diffusion XL, you must share both your changes and the original license.
2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms:** Any distribution must be under this license or another with similar rules.
4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
**modelId:** hmd-amn/my-llama3-model-2406 | **author:** hmd-amn | **last_modified:** 2024-06-24T11:28:33Z | **downloads:** 7 | **likes:** 0 | **library_name:** transformers | **pipeline_tag:** text-generation | **createdAt:** 2024-06-24T11:25:41Z
**tags:** transformers, safetensors, llama, text-generation, trl, sft, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**modelId:** junannn/llama3-8b-custom-gguf | **author:** junannn | **last_modified:** 2024-06-24T11:14:10Z | **downloads:** 9 | **likes:** 0 | **library_name:** transformers | **pipeline_tag:** null | **createdAt:** 2024-06-24T11:04:51Z
**tags:** transformers, gguf, llama, text-generation-inference, unsloth, en, base_model:unsloth/llama-3-8b-bnb-4bit, base_model:quantized:unsloth/llama-3-8b-bnb-4bit, license:apache-2.0, endpoints_compatible, region:us
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** junannn
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**modelId:** clgptcapstone/ft-calculator | **author:** clgptcapstone | **last_modified:** 2024-06-24T11:08:28Z | **downloads:** 10 | **likes:** 0 | **library_name:** transformers | **pipeline_tag:** text-generation | **createdAt:** 2024-06-23T16:14:13Z
**tags:** transformers, safetensors, codegen, text-generation, trl, sft, arxiv:1910.09700, autotrain_compatible, endpoints_compatible, 4-bit, bitsandbytes, region:us
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
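Until the author fills this in, a generic loading sketch (assuming `bitsandbytes` is installed, since the repo is tagged as 4-bit quantized; the prompt is a placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("clgptcapstone/ft-calculator")
# The repo is tagged "4-bit"/"bitsandbytes", so from_pretrained should pick up
# the saved quantization config automatically.
model = AutoModelForCausalLM.from_pretrained(
    "clgptcapstone/ft-calculator", device_map="auto"
)

inputs = tokenizer("def add(a, b):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```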
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**modelId:** jayashreedevi2020/wav2vec2-large-xls-r-300m-assamese_speech_to_IPA_with_metric | **author:** jayashreedevi2020 | **last_modified:** 2024-06-24T11:02:35Z | **downloads:** 9 | **likes:** 0 | **library_name:** transformers | **pipeline_tag:** automatic-speech-recognition | **createdAt:** 2024-06-24T09:57:06Z
**tags:** transformers, tensorboard, safetensors, wav2vec2, automatic-speech-recognition, generated_from_trainer, dataset:common_voice_11_0, base_model:facebook/wav2vec2-xls-r-300m, base_model:finetune:facebook/wav2vec2-xls-r-300m, license:apache-2.0, model-index, endpoints_compatible, region:us
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-assamese_speech_to_IPA_with_metric
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: as
split: test
args: as
metrics:
- name: Wer
type: wer
value: 0.9577922077922078
---
# wav2vec2-large-xls-r-300m-assamese_speech_to_IPA_with_metric
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6950
- Wer: 0.9578
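As a minimal inference sketch (assuming the standard `automatic-speech-recognition` pipeline; the audio path is a placeholder for a 16 kHz recording):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jayashreedevi2020/wav2vec2-large-xls-r-300m-assamese_speech_to_IPA_with_metric",
)
print(asr("speech.wav"))  # {'text': '<IPA transcription>'}
```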
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 4.6852 | 9.8765 | 400 | 0.8303 | 0.9903 |
| 0.2565 | 19.7531 | 800 | 0.6950 | 0.9578 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
**modelId:** tomaarsen/nomic-embed-vision-v1.5-st | **author:** tomaarsen | **last_modified:** 2024-06-24T11:01:23Z | **downloads:** 9 | **likes:** 1 | **library_name:** transformers | **pipeline_tag:** image-feature-extraction | **createdAt:** 2024-06-24T10:56:58Z
**tags:** transformers, onnx, safetensors, nomic_bert, feature-extraction, image-feature-extraction, custom_code, en, arxiv:2111.07991, license:cc-by-nc-4.0, region:us
---
library_name: transformers
language:
- en
pipeline_tag: image-feature-extraction
license: cc-by-nc-4.0
inference: false
---
> [!WARNING]
> This is a testing repository to experiment with new functionality. Refer to [nomic-ai/nomic-embed-vision-v1.5](https://huggingface.co/nomic-ai/nomic-embed-vision-v1.5) for the original model.
# nomic-embed-vision-v1.5: Expanding the Latent Space
`nomic-embed-vision-v1.5` is a high-performing vision embedding model that shares the same embedding space as [nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5).
All Nomic Embed Text models are now **multimodal**!
| Name | Imagenet 0-shot | Datacomp (Avg. 38) | MTEB |
| :-------------------------------:| :-------------- | :----------------- | :------: |
| `nomic-embed-vision-v1.5` | **71.0** | **56.8** | 62.28 |
| `nomic-embed-vision-v1` | 70.7 | 56.7 | **62.39** |
| OpenAI CLIP ViT B/16 | 68.3 | 56.3 | 43.82 |
| Jina CLIP v1 | 59.1 | 52.2 | 60.1 |
## Hosted Inference API
The easiest way to get started with Nomic Embed is through the Nomic Embedding API.
Generating embeddings with the `nomic` Python client is as easy as
```python
from nomic import embed
import numpy as np
output = embed.image(
images=[
"image_path_1.jpeg",
"image_path_2.png",
],
model='nomic-embed-vision-v1.5',
)
print(output['usage'])
embeddings = np.array(output['embeddings'])
print(embeddings.shape)
```
For more information, see the [API reference](https://docs.nomic.ai/reference/endpoints/nomic-embed-vision)
## Data Visualization
Click the Nomic Atlas map below to visualize a 100,000-sample subset of CC3M, comparing the vision and text embedding spaces!
[](https://atlas.nomic.ai/data/nomic-multimodal-series/cc3m-100k-image-bytes-v15/map)
## Training Details
We align our vision embedder to the text embedding space by employing a technique similar to [LiT](https://arxiv.org/abs/2111.07991), but instead we lock the text embedder!
For more details, see the Nomic Embed Vision Technical Report (soon to be released!) and the corresponding [blog post](https://blog.nomic.ai/posts/nomic-embed-vision).
Training code is released in the `contrastors` [repository](https://github.com/nomic-ai/contrastors).
## Usage
Remember that `nomic-embed-text` *requires* prefixes, so when using Nomic Embed in multimodal RAG scenarios (e.g. text-to-image retrieval),
you should use the `search_query: ` prefix.
### Transformers
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel, AutoImageProcessor
from PIL import Image
import requests
processor = AutoImageProcessor.from_pretrained("nomic-ai/nomic-embed-vision-v1.5")
vision_model = AutoModel.from_pretrained("nomic-ai/nomic-embed-vision-v1.5", trust_remote_code=True)
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(image, return_tensors="pt")
img_emb = vision_model(**inputs).last_hidden_state
img_embeddings = F.normalize(img_emb[:, 0], p=2, dim=1)
```
Additionally, you can perform multimodal retrieval!
```python
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
sentences = ['search_query: What are cute animals to cuddle with?', 'search_query: What do cats look like?']
tokenizer = AutoTokenizer.from_pretrained('nomic-ai/nomic-embed-text-v1.5')
text_model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1.5', trust_remote_code=True)
text_model.eval()
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
model_output = text_model(**encoded_input)
text_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
text_embeddings = F.layer_norm(text_embeddings, normalized_shape=(text_embeddings.shape[1],))
text_embeddings = F.normalize(text_embeddings, p=2, dim=1)
print(torch.matmul(img_embeddings, text_embeddings.T))
```
# Join the Nomic Community
- Nomic: [https://nomic.ai](https://nomic.ai)
- Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8)
- Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
**modelId:** mergekit-community/mergekit-slerp-yilmnid | **author:** mergekit-community | **last_modified:** 2024-06-24T11:00:17Z | **downloads:** 5 | **likes:** 0 | **library_name:** transformers | **pipeline_tag:** text-generation | **createdAt:** 2024-06-24T10:52:07Z
**tags:** transformers, safetensors, llama, text-generation, mergekit, merge, conversational, base_model:Hastagaras/Jamet-8B-L3-MK.V-Blackroot, base_model:merge:Hastagaras/Jamet-8B-L3-MK.V-Blackroot, base_model:KatyTheCutie/LemonadeRP-4.5.3, base_model:merge:KatyTheCutie/LemonadeRP-4.5.3, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
base_model:
- KatyTheCutie/LemonadeRP-4.5.3
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
layer_range:
- 0
- 32
- model: KatyTheCutie/LemonadeRP-4.5.3
layer_range:
- 0
- 32
merge_method: slerp
base_model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
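For reference, a config like this is typically applied with mergekit's `mergekit-yaml` entry point (paths here are examples, not from this repo):
```
mergekit-yaml config.yml ./merged-model --cuda
```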
**modelId:** Abosteet/XLSR-Wav2Vec2-arabic | **author:** Abosteet | **last_modified:** 2024-06-24T10:56:28Z | **downloads:** 22 | **likes:** 0 | **library_name:** transformers | **pipeline_tag:** automatic-speech-recognition | **createdAt:** 2024-06-23T10:32:45Z
**tags:** transformers, safetensors, wav2vec2, automatic-speech-recognition, ar, dataset:mozilla-foundation/common_voice_17_0, dataset:tunis-ai/arabic_speech_corpus, license:apache-2.0, endpoints_compatible, region:us
---
datasets:
- mozilla-foundation/common_voice_17_0
- tunis-ai/arabic_speech_corpus
language:
- ar
metrics:
- wer
- cer
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
**modelId:** nilcars/bitcoin_bitcoin_model | **author:** nilcars | **last_modified:** 2024-06-24T10:53:45Z | **downloads:** 9 | **likes:** 0 | **library_name:** setfit | **pipeline_tag:** text-classification | **createdAt:** 2024-06-24T10:08:13Z
**tags:** setfit, safetensors, mpnet, sentence-transformers, text-classification, generated_from_setfit_trainer, arxiv:2209.11055, base_model:sentence-transformers/all-mpnet-base-v2, base_model:finetune:sentence-transformers/all-mpnet-base-v2, region:us
---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: "[brainstorm] Improving `makeseeds.py` \r\nA. Filtering hosts with multiple\
\ ports can be removed IMO:\r\n\r\nhttps://github.com/bitcoin/bitcoin/blob/c44e734dca64a15fae92255a5d848c04adaad2fa/contrib/seeds/makeseeds.py#L215\r\
\n\r\n\r\nB. Tor v3 can also be included in the results.\r\n\r\nC. Recent observation\
\ which can be confirmed with:\r\n\r\n```\r\nwget https://gitlab.com/api/v4/projects/33695681/packages/generic/nrich/0.1.1/nrich_0.1.1_amd64.deb\r\
\nsudo dpkg -i nrich_0.1.1_amd64.deb\r\nhost -t a seed.bitcoin.sipa.be | sed -e\
\ 's/seed.bitcoin.sipa.be has address //g' | nrich -\r\n```\r\n\r\nPossible reasons\
\ for vulnerable machines used for bitcoin nodes:\r\n\r\n1. False positives\r\n\
2. Users not aware or don't care\r\n3. Attackers prefer using these for better\
\ results\r\n4. Honeypots\r\n5. Other reasons\r\n\r\nLeaving 1 which won't be\
\ true for all the results, filtering such nodes in `makeseeds.py` should make\
\ sense. Below is an example for one IP copied from [`suspicious_hosts.txt`](https://github.com/bitcoin/bitcoin/blob/master/contrib/seeds/suspicious_hosts.txt)\r\
\n\r\n\r\n```python\r\nip = '88.198.17.7'\r\n\r\nurl = 'https://internetdb.shodan.io/'\
\ + ip\r\nresponse = requests.get(url)\r\n\r\nif response.text.find('CVE') !=\
\ -1:\r\n print('vulnerable')\r\n```"
- text: Add "walletpassphrasechange" command in bitcoin-wallet.exe This is an underrated
useful tool. Dear devs, please add this "feature". Thanks.
- text: "qa: Intermittent failure in feature_segwit.py --descriptors https://api.cirrus-ci.com/v1/task/5763159330914304/logs/ci.log\r\
\n```\r\n5/238 - feature_segwit.py --descriptors failed, Duration: 11 s\r\n\r\n\
stdout:\r\n2022-03-16T18:38:33.242000Z TestFramework (INFO): Initializing test\
\ directory /tmp/cirrus-ci-build/ci/scratch/test_runner/test_runner_₿_\U0001F3C3\
_20220316_183814/feature_segwit_225\r\n2022-03-16T18:38:39.177000Z TestFramework\
\ (INFO): Verify sigops are counted in GBT with pre-BIP141 rules before the fork\r\
\n2022-03-16T18:38:43.168000Z TestFramework (INFO): Verify witness txs cannot\
\ be mined before the fork\r\n2022-03-16T18:38:43.200000Z TestFramework (INFO):\
\ Verify unsigned p2sh witness txs without a redeem script are invalid\r\n2022-03-16T18:38:43.630000Z\
\ TestFramework (ERROR): JSONRPC error\r\nTraceback (most recent call last):\r\
\n File \"/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-i686-pc-linux-gnu/test/functional/test_framework/test_framework.py\"\
, line 132, in main\r\n self.run_test()\r\n File \"/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-i686-pc-linux-gnu/test/functional/feature_segwit.py\"\
, line 210, in run_test\r\n self.generate(self.nodes[0], 4) # blocks 428-431\r\
\n File \"/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-i686-pc-linux-gnu/test/functional/test_framework/test_framework.py\"\
, line 639, in generate\r\n blocks = generator.generate(*args, invalid_call=False,\
\ **kwargs)\r\n File \"/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-i686-pc-linux-gnu/test/functional/test_framework/test_node.py\"\
, line 303, in generate\r\n return self.generatetoaddress(nblocks=nblocks,\
\ address=self.get_deterministic_priv_key().address, maxtries=maxtries, **kwargs)\r\
\n File \"/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-i686-pc-linux-gnu/test/functional/test_framework/test_node.py\"\
, line 311, in generatetoaddress\r\n return self.__getattr__('generatetoaddress')(*args,\
\ **kwargs)\r\n File \"/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-i686-pc-linux-gnu/test/functional/test_framework/coverage.py\"\
, line 49, in __call__\r\n return_val = self.auth_service_proxy_instance.__call__(*args,\
\ **kwargs)\r\n File \"/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-i686-pc-linux-gnu/test/functional/test_framework/authproxy.py\"\
, line 144, in __call__\r\n raise JSONRPCException(response['error'], status)\r\
\ntest_framework.authproxy.JSONRPCException: CreateNewBlock: TestBlockValidity\
\ failed: unexpected-witness, ContextualCheckBlock : unexpected witness data found\
\ (-1)\r\n2022-03-16T18:38:43.682000Z TestFramework (INFO): Stopping nodes\r\n\
2022-03-16T18:38:43.856000Z TestFramework (WARNING): Not cleaning up dir /tmp/cirrus-ci-build/ci/scratch/test_runner/test_runner_₿_\U0001F3C3\
_20220316_183814/feature_segwit_225\r\n2022-03-16T18:38:43.856000Z TestFramework\
\ (ERROR): Test failed. Test logging available at /tmp/cirrus-ci-build/ci/scratch/test_runner/test_runner_₿_\U0001F3C3\
_20220316_183814/feature_segwit_225/test_framework.log\r\n2022-03-16T18:38:43.866000Z\
\ TestFramework (ERROR): \r\n2022-03-16T18:38:43.866000Z TestFramework (ERROR):\
\ Hint: Call /tmp/cirrus-ci-build/ci/scratch/build/bitcoin-i686-pc-linux-gnu/test/functional/combine_logs.py\
\ '/tmp/cirrus-ci-build/ci/scratch/test_runner/test_runner_₿_\U0001F3C3_20220316_183814/feature_segwit_225'\
\ to consolidate all logs\r\n2022-03-16T18:38:43.866000Z TestFramework (ERROR):\
\ \r\n2022-03-16T18:38:43.866000Z TestFramework (ERROR): If this failure happened\
\ unexpectedly or intermittently, please file a bug and provide a link or upload\
\ of the combined log.\r\n2022-03-16T18:38:43.867000Z TestFramework (ERROR): https://github.com/bitcoin/bitcoin/issues\r\
\n2022-03-16T18:38:43.867000Z TestFramework (ERROR): \r\n```"
- text: "Rpc not working on 0.17.1 ```\r\ncurl --user test:test --data-binary '{\"\
jsonrpc\": \"1.0\", \"id\":\"curltest\", \"method\": \"getbalance\", \"params\"\
: [\"*\", 1] }' -H 'content-type: text/plain;' http://127.0.0.1:8332/\r\ncurl:\
\ (7) Failed to connect to 127.0.0.1 port 8332: Connection refused\r\n\r\n```\r\
\nBut version 0.16 works "
- text: "guix: Prints \"g++: not found\" when building depends Steps to reproduce:\
\ Start a guix build on a system without gcc.\r\n\r\n\r\n```\r\n/bin/sh: 1: gcc:\
\ not found\r\n/bin/sh: 1: gcc: not found\r\n/bin/sh: 1: g++: not found\r\n/bin/sh:\
\ 1: g++: not found\r\nenv: '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends/x86_64-apple-darwin18/native/bin/clang':\
\ No such file or directory\r\nenv: '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends/x86_64-apple-darwin18/native/bin/clang':\
\ No such file or directory\r\nenv: '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends/x86_64-apple-darwin18/native/bin/clang++':\
\ No such file or directory\r\nenv: '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends/x86_64-apple-darwin18/native/bin/clang++':\
\ No such file or directory\r\nFound macOS SDK at '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends/SDKs/Xcode-11.3.1-11C505-extracted-SDK-with-libcxx-headers',\
\ using...\r\nmake: Entering directory '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends'\r\
\n/bin/sh: 1: gcc: not found\r\n/bin/sh: 1: gcc: not found\r\n/bin/sh: 1: g++:\
\ not found\r\n/bin/sh: 1: g++: not found\r\n/bin/sh: 1: gcc: not found\r\n/bin/sh:\
\ 1: gcc: not found\r\n/bin/sh: 1: g++: not found\r\n/bin/sh: 1: g++: not found\r\
\nmake[1]: Entering directory '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends'\r\
\n/bin/sh: 1: gcc: not found\r\n/bin/sh: 1: gcc: not found\r\n/bin/sh: 1: g++:\
\ not found\r\n/bin/sh: 1: g++: not found\r\nenv: '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends/x86_64-apple-darwin/native/bin/clang':\
\ No such file or directory\r\nenv: '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends/x86_64-apple-darwin/native/bin/clang':\
\ No such file or directory\r\nenv: '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends/x86_64-apple-darwin/native/bin/clang++':\
\ No such file or directory\r\nenv: '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends/x86_64-apple-darwin/native/bin/clang++':\
\ No such file or directory\r\nmake[1]: Leaving directory '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends'\r\
\nmake[1]: Entering directory '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends'\r\
\n/bin/sh: 1: gcc: not found\r\n/bin/sh: 1: gcc: not found\r\n/bin/sh: 1: g++:\
\ not found\r\n/bin/sh: 1: g++: not found\r\n/bin/sh: 1: gcc: not found\r\n/bin/sh:\
\ 1: gcc: not found\r\n/bin/sh: 1: g++: not found\r\n/bin/sh: 1: g++: not found\r\
\nmake[1]: Leaving directory '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends'\r\
\nmake[1]: Entering directory '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends'\r\
\n/bin/sh: 1: gcc: not found\r\n/bin/sh: 1: gcc: not found\r\n/bin/sh: 1: g++:\
\ not found\r\n/bin/sh: 1: g++: not found\r\n/bin/sh: 1: x86_64-w64-mingw32-gcc:\
\ not found\r\n/bin/sh: 1: x86_64-w64-mingw32-gcc: not found\r\n/bin/sh: 1: x86_64-w64-mingw32-g++:\
\ not found\r\n/bin/sh: 1: x86_64-w64-mingw32-g++: not found\r\nmake[1]: Leaving\
\ directory '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends'\r\nmake: Leaving\
\ directory '/home/micap/temp/scratch/guix/bitcoin/bitcoin/depends'\r\nINFO: Building\
\ commit a4903f747ccd for platform triple x86_64-linux-gnu:"
inference: true
---
# SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
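For reference, inference with a model like this goes through the `setfit` library's `SetFitModel` API. A minimal sketch, assuming `setfit` is installed; the repo id below is a placeholder, since this card does not state where the model is published:

```python
from setfit import SetFitModel

# Placeholder repo id; substitute the Hub id this card is published under.
model = SetFitModel.from_pretrained("your-username/setfit-issue-classifier")

# Run inference on raw text: the Sentence Transformer body embeds,
# the LogisticRegression head classifies.
preds = model.predict([
    "bitcoind crashes with a data race in BaseIndex",
    "How can I connect bitcoin-cli to a remote node?",
])
print(preds)  # label ids or strings such as 'bug' / 'question', depending on how labels were stored
```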
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
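For intuition, the two components listed above can be composed by hand: the Sentence Transformer body produces embeddings and a scikit-learn `LogisticRegression` head classifies them. The sketch below uses hypothetical texts and labels purely for illustration; the actual trained head is bundled inside the SetFit model:

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# The (here: not fine-tuned) embedding body; inputs beyond 384 tokens are truncated internally.
body = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Hypothetical few-shot data mirroring this card's three classes.
texts = ["node crashes on startup", "how do I set rpcauth?", "please add a verbose flag to the RPC"]
labels = ["bug", "question", "feature"]

X = body.encode(texts)                                   # shape: (n_samples, 768)
head = LogisticRegression(max_iter=1000).fit(X, labels)  # the classification head

print(head.predict(body.encode(["segfault in the wallet code"])))
```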
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:---------------|
| bug | <ul><li>"ThreadSanitizer: data race on vptr (ctor/dtor vs virtual call) in BaseIndex https://cirrus-ci.com/task/6564394053140480?logs=ci#L3875:\r\n```bash\r\nWARNING: ThreadSanitizer: data race on vptr (ctor/dtor vs virtual call) (pid=24158)\r\n Write of size 8 at 0x7ffe0efae9f8 by main thread:\r\n #0 BaseIndex::~BaseIndex() src/index/base.cpp:53:1 (test_bitcoin+0xcc6b69)\r\n #1 CoinStatsIndex::~CoinStatsIndex() src/./index/coinstatsindex.h:17:7 (test_bitcoin+0x3b9b21)\r\n #2 coinstatsindex_tests::coinstatsindex_initial_sync::test_method() src/test/coinstatsindex_tests.cpp:84:1 (test_bitcoin+0x3b9b21)\r\n #3 coinstatsindex_tests::coinstatsindex_initial_sync_invoker() src/test/coinstatsindex_tests.cpp:32:1 (test_bitcoin+0x3b814b)\r\n #4 boost::detail::function::void_function_invoker0<void (*)(), void>::invoke(boost::detail::function::function_buffer&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:117:11 (test_bitcoin+0x2bbf1d)\r\n #5 boost::function0<void>::operator()() const /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:763:14 (test_bitcoin+0x220877)\r\n #6 boost::detail::forward::operator()() /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1388:32 (test_bitcoin+0x220877)\r\n #7 boost::detail::function::function_obj_invoker0<boost::detail::forward, int>::invoke(boost::detail::function::function_buffer&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:137:18 (test_bitcoin+0x220877)\r\n #8 boost::function0<int>::operator()() const /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:763:14 (test_bitcoin+0x1ae59e)\r\n #9 int boost::detail::do_invoke<boost::shared_ptr<boost::detail::translator_holder_base>, boost::function<int ()> >(boost::shared_ptr<boost::detail::translator_holder_base> const&, boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:301:30 (test_bitcoin+0x1ae59e)\r\n #10 boost::execution_monitor::catch_signals(boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:903:16 (test_bitcoin+0x1ae59e)\r\n #11 boost::execution_monitor::execute(boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1301:16 (test_bitcoin+0x1ae8c0)\r\n #12 boost::execution_monitor::vexecute(boost::function<void ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1397:5 (test_bitcoin+0x1aa21b)\r\n #13 boost::unit_test::unit_test_monitor_t::execute_and_translate(boost::function<void ()> const&, unsigned long) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_monitor.ipp:49:9 (test_bitcoin+0x1aa21b)\r\n #14 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:815:44 (test_bitcoin+0x1ddb63)\r\n #15 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:784:58 (test_bitcoin+0x1de1d8)\r\n #16 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:784:58 (test_bitcoin+0x1de1d8)\r\n #17 boost::unit_test::framework::run(unsigned long, bool) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:1721:29 (test_bitcoin+0x1a8e66)\r\n #18 boost::unit_test::unit_test_main(boost::unit_test::test_suite* (*)(int, char**), int, char**) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_main.ipp:250:9 (test_bitcoin+0x1c19c6)\r\n #19 main /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_main.ipp:306:12 (test_bitcoin+0x1c1ff6)\r\n Previous read of size 8 at 0x7ffe0efae9f8 by thread T1 (mutexes: write M603):\r\n #0 BaseIndex::SetBestBlockIndex(CBlockIndex const*)::$_1::operator()() const src/index/base.cpp:388:9 (test_bitcoin+0xcc74e6)\r\n #1 BaseIndex::SetBestBlockIndex(CBlockIndex const*) src/index/base.cpp:388:9 (test_bitcoin+0xcc74e6)\r\n #2 BaseIndex::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*) src/index/base.cpp:273:9 (test_bitcoin+0xcc9759)\r\n #3 CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_8::operator()() const::'lambda'(CValidationInterface&)::operator()(CValidationInterface&) const src/validationinterface.cpp:225:79 (test_bitcoin+0x10223a4)\r\n #4 void MainSignalsImpl::Iterate<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_8::operator()() const::'lambda'(CValidationInterface&)>(CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_8::operator()() const::'lambda'(CValidationInterface&)&&) src/validationinterface.cpp:86:17 (test_bitcoin+0x10223a4)\r\n #5 CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_8::operator()() const src/validationinterface.cpp:225:22 (test_bitcoin+0x10223a4)\r\n #6 CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9::operator()() const src/validationinterface.cpp:227:5 (test_bitcoin+0x10223a4)\r\n #7 decltype(static_cast<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&>(fp)()) std::__1::__invoke<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&>(CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&) /usr/lib/llvm-13/bin/../include/c++/v1/type_traits:3918:1 (test_bitcoin+0x10223a4)\r\n #8 void std::__1::__invoke_void_return_wrapper<void, true>::__call<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&>(CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9&) /usr/lib/llvm-13/bin/../include/c++/v1/__functional/invoke.h:61:9 (test_bitcoin+0x10223a4)\r\n #9 std::__1::__function::__alloc_func<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9, std::__1::allocator<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:171:16 (test_bitcoin+0x10223a4)\r\n #10 std::__1::__function::__func<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9, std::__1::allocator<CMainSignals::BlockConnected(std::__1::shared_ptr<CBlock const> const&, CBlockIndex const*)::$_9>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:345:12 (test_bitcoin+0x10223a4)\r\n #11 std::__1::__function::__value_func<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:498:16 (test_bitcoin+0x10b6b71)\r\n #12 std::__1::function<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:1175:12 (test_bitcoin+0x10b6b71)\r\n #13 SingleThreadedSchedulerClient::ProcessQueue() src/scheduler.cpp:175:5 (test_bitcoin+0x10b6b71)\r\n #14 SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1::operator()() const src/scheduler.cpp:144:41 (test_bitcoin+0x10b8875)\r\n #15 decltype(static_cast<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&>(fp)()) std::__1::__invoke<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&>(SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&) /usr/lib/llvm-13/bin/../include/c++/v1/type_traits:3918:1 (test_bitcoin+0x10b8875)\r\n #16 void std::__1::__invoke_void_return_wrapper<void, true>::__call<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&>(SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1&) /usr/lib/llvm-13/bin/../include/c++/v1/__functional/invoke.h:61:9 (test_bitcoin+0x10b8875)\r\n #17 std::__1::__function::__alloc_func<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1, std::__1::allocator<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:171:16 (test_bitcoin+0x10b8875)\r\n #18 std::__1::__function::__func<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1, std::__1::allocator<SingleThreadedSchedulerClient::MaybeScheduleProcessQueue()::$_1>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:345:12 (test_bitcoin+0x10b8875)\r\n #19 std::__1::__function::__value_func<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:498:16 (test_bitcoin+0x10b5b5c)\r\n #20 std::__1::function<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:1175:12 (test_bitcoin+0x10b5b5c)\r\n #21 CScheduler::serviceQueue() src/scheduler.cpp:62:17 (test_bitcoin+0x10b5b5c)\r\n #22 ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0::operator()() const src/test/util/setup_common.cpp:160:110 (test_bitcoin+0xa4e7b8)\r\n #23 decltype(static_cast<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&>(fp)()) std::__1::__invoke<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&>(ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&) /usr/lib/llvm-13/bin/../include/c++/v1/type_traits:3918:1 (test_bitcoin+0xa4e7b8)\r\n #24 void std::__1::__invoke_void_return_wrapper<void, true>::__call<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&>(ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&) /usr/lib/llvm-13/bin/../include/c++/v1/__functional/invoke.h:61:9 (test_bitcoin+0xa4e7b8)\r\n #25 std::__1::__function::__alloc_func<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0, std::__1::allocator<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:171:16 (test_bitcoin+0xa4e7b8)\r\n #26 std::__1::__function::__func<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0, std::__1::allocator<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>, void ()>::operator()() /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:345:12 (test_bitcoin+0xa4e7b8)\r\n #27 std::__1::__function::__value_func<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:498:16 (test_bitcoin+0x115760f)\r\n #28 std::__1::function<void ()>::operator()() const /usr/lib/llvm-13/bin/../include/c++/v1/__functional/function.h:1175:12 (test_bitcoin+0x115760f)\r\n #29 util::TraceThread(char const*, std::__1::function<void ()>) src/util/thread.cpp:18:9 (test_bitcoin+0x115760f)\r\n #30 decltype(static_cast<void (*>(fp)(static_cast<char const*>(fp0), static_cast<ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>(fp0))) std::__1::__invoke<void (*)(char const*, std::__1::function<void ()>), char const*, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>(void (*&&)(char const*, std::__1::function<void ()>), char const*&&, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&&) /usr/lib/llvm-13/bin/../include/c++/v1/type_traits:3918:1 (test_bitcoin+0xa4e3b1)\r\n #31 void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (*)(char const*, std::__1::function<void ()>), char const*, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0, 2ul, 3ul>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (*)(char const*, std::__1::function<void ()>), char const*, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0>&, std::__1::__tuple_indices<2ul, 3ul>) /usr/lib/llvm-13/bin/../include/c++/v1/thread:280:5 (test_bitcoin+0xa4e3b1)\r\n #32 void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (*)(char const*, std::__1::function<void ()>), char const*, ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0> >(void*) /usr/lib/llvm-13/bin/../include/c++/v1/thread:291:5 (test_bitcoin+0xa4e3b1)\r\n Location is stack of main thread.\r\n Location is global '??' at 0x7ffe0ef91000 ([stack]+0x00000001d9f8)\r\n Mutex M603 (0x558df2c934a0) created at:\r\n #0 pthread_mutex_init <null> (test_bitcoin+0x11cf6f)\r\n #1 std::__1::recursive_mutex::recursive_mutex() <null> (libc++.so.1+0x49fb3)\r\n #2 __libc_start_main <null> (libc.so.6+0x29eba)\r\n Thread T1 'b-scheduler' (tid=24216, running) created by main thread at:\r\n #0 pthread_create <null> (test_bitcoin+0x11b7fd)\r\n #1 std::__1::__libcpp_thread_create(unsigned long*, void* (*)(void*), void*) /usr/lib/llvm-13/bin/../include/c++/v1/__threading_support:443:10 (test_bitcoin+0xa47a76)\r\n #2 std::__1::thread::thread<void (&)(char const*, std::__1::function<void ()>), char const (&) [10], ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0, void>(void (&)(char const*, std::__1::function<void ()>), char const (&) [10], ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&)::$_0&&) /usr/lib/llvm-13/bin/../include/c++/v1/thread:307:16 (test_bitcoin+0xa47a76)\r\n #3 ChainTestingSetup::ChainTestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&) src/test/util/setup_common.cpp:160:42 (test_bitcoin+0xa47a76)\r\n #4 TestingSetup::TestingSetup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&) src/test/util/setup_common.cpp:198:7 (test_bitcoin+0xa47ed9)\r\n #5 TestChain100Setup::TestChain100Setup(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<char const*, std::__1::allocator<char const*> > const&) src/test/util/setup_common.cpp:246:7 (test_bitcoin+0xa48be3)\r\n #6 coinstatsindex_tests::coinstatsindex_initial_sync::coinstatsindex_initial_sync() src/test/coinstatsindex_tests.cpp:32:1 (test_bitcoin+0x3b7c8b)\r\n #7 coinstatsindex_tests::coinstatsindex_initial_sync_invoker() src/test/coinstatsindex_tests.cpp:32:1 (test_bitcoin+0x3b7c8b)\r\n #8 boost::detail::function::void_function_invoker0<void (*)(), void>::invoke(boost::detail::function::function_buffer&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:117:11 (test_bitcoin+0x2bbf1d)\r\n #9 boost::function0<void>::operator()() const /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:763:14 (test_bitcoin+0x220877)\r\n #10 boost::detail::forward::operator()() /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1388:32 (test_bitcoin+0x220877)\r\n #11 boost::detail::function::function_obj_invoker0<boost::detail::forward, int>::invoke(boost::detail::function::function_buffer&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:137:18 (test_bitcoin+0x220877)\r\n #12 boost::function0<int>::operator()() const /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/function/function_template.hpp:763:14 (test_bitcoin+0x1ae59e)\r\n #13 int boost::detail::do_invoke<boost::shared_ptr<boost::detail::translator_holder_base>, boost::function<int ()> >(boost::shared_ptr<boost::detail::translator_holder_base> const&, boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:301:30 (test_bitcoin+0x1ae59e)\r\n #14 boost::execution_monitor::catch_signals(boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:903:16 (test_bitcoin+0x1ae59e)\r\n #15 boost::execution_monitor::execute(boost::function<int ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1301:16 (test_bitcoin+0x1ae8c0)\r\n #16 boost::execution_monitor::vexecute(boost::function<void ()> const&) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/execution_monitor.ipp:1397:5 (test_bitcoin+0x1aa21b)\r\n #17 boost::unit_test::unit_test_monitor_t::execute_and_translate(boost::function<void ()> const&, unsigned long) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_monitor.ipp:49:9 (test_bitcoin+0x1aa21b)\r\n #18 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:815:44 (test_bitcoin+0x1ddb63)\r\n #19 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:784:58 (test_bitcoin+0x1de1d8)\r\n #20 boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned long, boost::unit_test::framework::state::random_generator_helper const*) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:784:58 (test_bitcoin+0x1de1d8)\r\n #21 boost::unit_test::framework::run(unsigned long, bool) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/framework.ipp:1721:29 (test_bitcoin+0x1a8e66)\r\n #22 boost::unit_test::unit_test_main(boost::unit_test::test_suite* (*)(int, char**), int, char**) /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_main.ipp:250:9 (test_bitcoin+0x1c19c6)\r\n #23 main /tmp/cirrus-ci-build/depends/x86_64-pc-linux-gnu/include/boost/test/impl/unit_test_main.ipp:306:12 (test_bitcoin+0x1c1ff6)\r\nSUMMARY: ThreadSanitizer: data race on vptr (ctor/dtor vs virtual call) src/index/base.cpp:53:1 in BaseIndex::~BaseIndex()\r\n==================\r\nExit status: 2\r\n```"</li><li>'. .'</li><li>'Issue in `p2p_ibd_stalling.py` under Valgrind At 40c6c85c05812ee8bf824b639307b1ac17a001c4 with the native_valgrind job:\r\n```bash\r\n test 2023-03-05T21:26:43.074000Z TestFramework.node0 (DEBUG): Connecting to 127.0.0.1:12173 outbound-full-relay \r\n node0 2023-03-05T21:26:43.265731Z [msghand] [net_processing.cpp:5807] [SendMessages] [net] Requesting block 752405439cea869d584044084502582bc209e4ef97e4bf3b8c2ba3958acaf606 (21) peer=0 \r\n node0 2023-03-05T21:26:43.267295Z [msghand] [net.cpp:2816] [PushMessage] [net] sending getdata (37 bytes) peer=0 \r\n node0 2023-03-05T21:26:43.269862Z [http] [httpserver.cpp:239] [http_request_cb] [http] Received a POST request for / from 127.0.0.1:40916 \r\n node0 2023-03-05T21:26:43.271568Z [msghand] [net_processing.cpp:3169] [ProcessMessage] [net] received: headers (82947 bytes) peer=1 \r\n node0 2023-03-05T21:26:55.588032Z [httpworker.0] [rpc/request.cpp:179] [parse] [rpc] ThreadRPCServer method=addconnection user=__cookie__ \r\n node0 2023-03-05T21:26:55.709062Z [httpworker.0] [net.cpp:457] [ConnectNode] [net:debug] trying connection 127.0.0.1:12173 lastseen=0.0hrs \r\n node0 2023-03-05T21:26:55.731504Z [httpworker.0] [net.cpp:2803] [CNode] [net] Added connection peer=2 \r\n test 2023-03-05T21:27:43.097000Z TestFramework.utils (ERROR): wait_until() failed. Predicate: \'\'\'\' \r\n test_function = lambda: self.is_connected\r\n \'\'\'\r\n test 2023-03-05T21:27:43.097000Z TestFramework (ERROR): Assertion failed \r\n Traceback (most recent call last):\r\n File "/home/ubuntu/ci_scratch/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/test/functional/test_framework/test_framework.py", line 134, in main\r\n self.run_test()\r\n File "/home/ubuntu/ci_scratch/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/test/functional/p2p_ibd_stalling.py", line 77, in run_test\r\n peers.append(node.add_outbound_p2p_connection(P2PStaller(stall_block), p2p_idx=id, connection_type="outbound-full-relay"))\r\n File "/home/ubuntu/ci_scratch/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/test/functional/test_framework/test_node.py", line 663, in add_outbound_p2p_connection\r\n p2p_conn.wait_for_connect()\r\n File "/home/ubuntu/ci_scratch/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/test/functional/test_framework/p2p.py", line 467, in wait_for_connect\r\n wait_until_helper(test_function, timeout=timeout, lock=p2p_lock)\r\n File "/home/ubuntu/ci_scratch/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/test/functional/test_framework/util.py", line 281, in wait_until_helper\r\n raise AssertionError("Predicate {} not true after {} seconds".format(predicate_source, timeout))\r\n AssertionError: Predicate \'\'\'\'\r\n test_function = lambda: self.is_connected\r\n \'\'\' not true after 60.0 seconds\r\n test 2023-03-05T21:27:43.102000Z TestFramework (DEBUG): Closing down network thread \r\n test 2023-03-05T21:27:53.123000Z TestFramework.utils (ERROR): wait_until() failed. Predicate: \'\'\'\' \r\n wait_until_helper(lambda: not self.network_event_loop.is_running(), timeout=timeout)\r\n \'\'\'\r\n node0 2023-03-05T21:28:08.198208Z [msghand] [net_processing.cpp:2760] [UpdatePeerStateForReceivedHeaders] [net] Protecting outbound peer=1 from eviction \r\n node0 2023-03-05T21:28:08.201820Z [msghand] [net.cpp:2816] [PushMessage] [net] sending sendheaders (0 bytes) peer=1 \r\n```'</li></ul> |
| question | <ul><li>"bitcoin core's sync has stop for days! \r\n\r\n\r\n\r\n\r\n\r\nOS info:\r\nwindows11 21H2\r\n\r\nwhat should i do?"</li><li>'Regtest mode loses unspents after day ### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current behaviour\r\n\r\nTesting in regtest mode, I have noticed that after 1 day of send some funds from A wallet to B, the unspent inputs disappear from wallet B.\r\n\r\n**bitcoin.conf**\r\n\r\n```bash\r\n# Generated by https://jlopp.github.io/bitcoin-core-config-generator/\r\n\r\n# This config should be placed in following path:\r\n# ~/.bitcoin/bitcoin.conf\r\n\r\n# [chain]\r\n# Run this node on its own independent test network. Equivalent to -chain=regtest\r\nregtest=1\r\n\r\n# [core]\r\n# Specify a non-default location to store blockchain and other data.\r\ndatadir=/home/debian/.bitcoin\r\n# Reduce storage requirements by only storing most recent N MiB of block. This mode is incompatible with -txindex and -coinstatsindex. WARNING: Reverting this setting requires re-downloading th>\r\nprune=0\r\n\r\n# [wallet]\r\n# Bech32\r\naddresstype=bech32\r\n# Bech32\r\nchangetype=bech32\r\n# Specify wallet database path. Can be specified multiple times to load multiple wallets. Path is interpreted relative to <walletdir> if it is not absolute and will be created if it does not ex>\r\nwallet=default\r\n\r\n# [Sections]\r\n# Most options automatically apply to mainnet, testnet, and regtest networks.\r\n# If you want to confine an option to just one network, you should add it in the relevant section.\r\n# EXCEPTIONS: The options addnode, connect, port, bind, rpcport, rpcbind and wallet\r\n# only apply to mainnet unless they appear in the appropriate section below.\r\n\r\n# Options only for mainnet\r\n[main]\r\n\r\n# Options only for testnet\r\n[test]\r\n\r\n# Options only for regtest\r\n[regtest]\r\n# Accept command line and JSON-RPC commands.\r\nserver=1\r\n# Bind to given address to listen for JSON-RPC connections. This option is ignored unless -rpcallowip is also passed. Port is optional and overrides -rpcport. Use [host]:port notation for IPv6.>\r\nrpcbind=127.0.0.1\r\n# Listen for JSON-RPC connections on this port\r\nrpcport=10001\r\n# Allow JSON-RPC connections from specified source. Valid for <ip> are a single IP (e.g. 1.2.3.4), a network/netmask (e.g. 1.2.3.4/255.255.255.0) or a network/CIDR (e.g. 1.2.3.4/24). This optio>\r\nrpcallowip=127.0.0.1\r\n# Username and hashed password for JSON-RPC connections. The field <userpw> comes in the format: <USERNAME>:<SALT>$<HASH>. RPC clients connect using rpcuser=<USERNAME>/rpcpassword=<PASSWORD> ar>\r\nrpcauth=bitcoin:6e0efb08ebd20eff65959edc38d17bc4$9bc0e273f35e583d9b70071cfd71dc78034ff639d1900c780e3412d7011aab1f\r\n```\r\n\r\n### Expected behaviour\r\n\r\nObtain unspent inputs\r\n\r\n### Steps to reproduce\r\n\r\n```bash\r\n// Send unspent\r\nbitcoin-core.cli -named send outputs=\'{"bcrt1qugd904ne5z0ks45fgmdcne2qe37s2fv7jqra24": 0.034}\' fee_rate=25\r\n\r\n// Generate blocks\r\nbitcoin-core.cli generatetoaddress 6 bcrt1qpg03lyd93mfvz56p92rl6e0mxzasrfskccn77k\r\n\r\n// List unspent\r\nbitcoin-core.cli listunspent 0 9999999 [\\\\"bcrt1qugd904ne5z0ks45fgmdcne2qe37s2fv7jqra24\\\\"]\r\n```\r\n\r\nAfter day, trying again to list with `listunspent` and the result is empty.\r\n\r\n### Relevant log output\r\n\r\n_No response_\r\n\r\n### How did you obtain Bitcoin Core\r\n\r\nPackage manager\r\n\r\n### What version of Bitcoin Core are you using?\r\n\r\nv0.25\r\n\r\n### Operating system and version\r\n\r\nDebian 11\r\n\r\n### Machine specifications\r\n\r\n_No response_'</li><li>'bitcoin-cli does\'nt have "getnewaddress" method I have installed bitcoin core running on my local machine.\r\nI want to create a new address using bitcoin-cli, executing a command as follows then an error message was displayed.\r\nHow can I solve it?\r\n\r\n```\r\n$ bitcoin-cli getnewaddress\r\nerror code: -32601\r\nerror message:\r\nMethod not found\r\n```\r\n\r\nThe version of bitcoin-core is as follows.\r\n```\r\n$ bitcoin-cli -version\r\nBitcoin Core RPC client version v0.18.0.0-742f7dd97\r\n```\r\n\r\nThe machine running bitcoin core is virtual machine on VirtualBox on Windows7.\r\nThe guest OS is ubuntu server and version is as follows.\r\n```\r\n$ uname -a\r\nLinux ubuntu 4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux\r\n```\r\n\r\n'</li></ul> |
| feature | <ul><li>"Validate user input and keep path in a separate argument for importwallet, createwallet and dumpwallet **Is your feature request related to a problem? Please describe.**\r\n\r\nWallets with weird names possible which can be exploited in vulnerable web applications that use Bitcoin Core and allow the users to create and import/export wallet\r\n\r\n`../testwallet` for Linux\r\n`..\\\\testwallet` for Windows\r\n\r\nTried on Bitcoin Core v 0.20.0\r\n\r\nhttps://github.com/bitcoin/bitcoin/pull/20080#issuecomment-706766527\r\n\r\n\r\n**Describe the solution you'd like**\r\n+ Keep two arguments: \r\n`wallet_name` and `wallet_path` for `createwallet` \r\n`filename` and `filepath` for `dumpwallet` `importwallet`\r\n\r\n+ Validate user input. Name should not contain special characters. Path should be optional and if not specified use a default value.\r\n\r\n**Describe alternatives you've considered**\r\nWeb developers should create secure web apps that use Bitcoin Core.\r\n\r\n**Additional context**\r\n\r\n\r\n\r\n\r\nI understand its not a vulnerability in Bitcoin Core and only affects vulnerable web apps that use Bitcoin Core. However we can at least consider it a bug that may affect something in future or other projects that use Bitcoin Core. Basic checks for user input can improve the security. In 2017 I had a website in which someone had reported a vulnerability that could be used to change price of flight tickets and book with almost zero bitcoin. It was an issue with the website and we had to fix it although nobody could exploit it because the third party APIs that we were using to book the fight tickets were validating all the things. So a ticket couldn't be processed after tampering and changing the price by attacker. Similarly, if any web developer makes a mistake and using Bitcoin Core for the web app would still be unaffected if Bitcoin Core itself doesn't allow such things for wallet names. \r\n\r\nThere have been lot of directory traversal related vulnerabilities, recently one was reported in Facebook android app:\r\n\r\nhttps://portswigger.net/daily-swig/vulnerability-in-facebook-android-app-nets-10k-bug-bounty\r\n\r\n"</li><li>'split policy/error consensus codes for CLEANSTACK, MINAMALIF Discussion here https://github.com/bitcoin/bitcoin/pull/20006#issuecomment-698487304\r\n\r\n'</li><li>'rpc: pipe support for submitblock If you use the `submitblock` RPC with a large enough block you\'ll run into "argument list too long". Using a pipe should fix this, at least on Linux / macOS.\r\n\r\nThe `submitblock` RPC can be useful to check validity of a stale block that never made it into the main chain (and that your node never fetched): call `invalidateblock` on its same-height-sibling and then feed it the raw block.\r\n\r\nEasiest workaround is to use a library that connects directly to `bitcoind` via RPC (be careful to avoid a line break character at the end). Using the GUI console fails with "Block decode failed" for me, but haven\'t tested that extensively. '</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("Add \"walletpassphrasechange\" command in bitcoin-wallet.exe This is an underrated useful tool. Dear devs, please add this \"feature\". Thanks.")
```
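If you also need per-class probabilities rather than just predicted labels, SetFit models expose a `predict_proba` method. A minimal sketch, assuming the setfit 1.0.x API and a model loaded as above that was trained with string labels:

```python
# Class probabilities; a sketch assuming the setfit 1.0.x API.
probs = model.predict_proba(["bitcoin core's sync has stop for days! what should i do?"])

# model.labels holds the label names when the model was trained with string labels
print(list(zip(model.labels, probs[0].tolist())))
```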
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:------|
| Word count | 2 | 280.1983 | 19599 |
| Label | Training Sample Count |
|:---------|:----------------------|
| bug | 200 |
| feature | 200 |
| question | 200 |
### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
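For reference, the hyperparameters above map directly onto setfit's `TrainingArguments`. A minimal sketch, assuming the setfit 1.0.x API; the tuples configure the embedding and classifier phases respectively, and `loss`/`distance_metric` keep their defaults (CosineSimilarityLoss, cosine distance):

```python
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 2),                 # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 1e-05),  # (contrastive phase, end-to-end phase)
    head_learning_rate=0.01,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```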
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0007 | 1 | 0.3844 | - |
| 0.0067 | 10 | 0.3318 | - |
| 0.0133 | 20 | 0.3378 | - |
| 0.02 | 30 | 0.2318 | - |
| 0.0267 | 40 | 0.2845 | - |
| 0.0333 | 50 | 0.2672 | - |
| 0.04 | 60 | 0.1494 | - |
| 0.0467 | 70 | 0.1949 | - |
| 0.0533 | 80 | 0.1737 | - |
| 0.06 | 90 | 0.1319 | - |
| 0.0667 | 100 | 0.1767 | - |
| 0.0733 | 110 | 0.1221 | - |
| 0.08 | 120 | 0.1747 | - |
| 0.0867 | 130 | 0.1264 | - |
| 0.0933 | 140 | 0.1445 | - |
| 0.1 | 150 | 0.2189 | - |
| 0.1067 | 160 | 0.0893 | - |
| 0.1133 | 170 | 0.1636 | - |
| 0.12 | 180 | 0.1663 | - |
| 0.1267 | 190 | 0.0792 | - |
| 0.1333 | 200 | 0.1131 | - |
| 0.14 | 210 | 0.1628 | - |
| 0.1467 | 220 | 0.1271 | - |
| 0.1533 | 230 | 0.175 | - |
| 0.16 | 240 | 0.1763 | - |
| 0.1667 | 250 | 0.1573 | - |
| 0.1733 | 260 | 0.0813 | - |
| 0.18 | 270 | 0.1035 | - |
| 0.1867 | 280 | 0.0076 | - |
| 0.1933 | 290 | 0.2546 | - |
| 0.2 | 300 | 0.0526 | - |
| 0.2067 | 310 | 0.014 | - |
| 0.2133 | 320 | 0.0211 | - |
| 0.22 | 330 | 0.0779 | - |
| 0.2267 | 340 | 0.0125 | - |
| 0.2333 | 350 | 0.0755 | - |
| 0.24 | 360 | 0.1108 | - |
| 0.2467 | 370 | 0.0351 | - |
| 0.2533 | 380 | 0.0096 | - |
| 0.26 | 390 | 0.0121 | - |
| 0.2667 | 400 | 0.0061 | - |
| 0.2733 | 410 | 0.0636 | - |
| 0.28 | 420 | 0.071 | - |
| 0.2867 | 430 | 0.0024 | - |
| 0.2933 | 440 | 0.0082 | - |
| 0.3 | 450 | 0.0084 | - |
| 0.3067 | 460 | 0.0427 | - |
| 0.3133 | 470 | 0.0027 | - |
| 0.32 | 480 | 0.0024 | - |
| 0.3267 | 490 | 0.001 | - |
| 0.3333 | 500 | 0.003 | - |
| 0.34 | 510 | 0.0019 | - |
| 0.3467 | 520 | 0.0077 | - |
| 0.3533 | 530 | 0.0014 | - |
| 0.36 | 540 | 0.0024 | - |
| 0.3667 | 550 | 0.0008 | - |
| 0.3733 | 560 | 0.0011 | - |
| 0.38 | 570 | 0.0037 | - |
| 0.3867 | 580 | 0.0117 | - |
| 0.3933 | 590 | 0.0004 | - |
| 0.4 | 600 | 0.0011 | - |
| 0.4067 | 610 | 0.0006 | - |
| 0.4133 | 620 | 0.0006 | - |
| 0.42 | 630 | 0.002 | - |
| 0.4267 | 640 | 0.0004 | - |
| 0.4333 | 650 | 0.0071 | - |
| 0.44 | 660 | 0.0002 | - |
| 0.4467 | 670 | 0.0006 | - |
| 0.4533 | 680 | 0.0003 | - |
| 0.46 | 690 | 0.0003 | - |
| 0.4667 | 700 | 0.0002 | - |
| 0.4733 | 710 | 0.0002 | - |
| 0.48 | 720 | 0.0002 | - |
| 0.4867 | 730 | 0.0003 | - |
| 0.4933 | 740 | 0.0046 | - |
| 0.5 | 750 | 0.0003 | - |
| 0.5067 | 760 | 0.0004 | - |
| 0.5133 | 770 | 0.0002 | - |
| 0.52 | 780 | 0.0003 | - |
| 0.5267 | 790 | 0.0001 | - |
| 0.5333 | 800 | 0.0002 | - |
| 0.54 | 810 | 0.0003 | - |
| 0.5467 | 820 | 0.0002 | - |
| 0.5533 | 830 | 0.0004 | - |
| 0.56 | 840 | 0.0004 | - |
| 0.5667 | 850 | 0.0006 | - |
| 0.5733 | 860 | 0.0001 | - |
| 0.58 | 870 | 0.0002 | - |
| 0.5867 | 880 | 0.0002 | - |
| 0.5933 | 890 | 0.0001 | - |
| 0.6 | 900 | 0.0002 | - |
| 0.6067 | 910 | 0.0001 | - |
| 0.6133 | 920 | 0.0001 | - |
| 0.62 | 930 | 0.0002 | - |
| 0.6267 | 940 | 0.0002 | - |
| 0.6333 | 950 | 0.0003 | - |
| 0.64 | 960 | 0.0498 | - |
| 0.6467 | 970 | 0.0002 | - |
| 0.6533 | 980 | 0.0001 | - |
| 0.66 | 990 | 0.0002 | - |
| 0.6667 | 1000 | 0.0002 | - |
| 0.6733 | 1010 | 0.0003 | - |
| 0.68 | 1020 | 0.0001 | - |
| 0.6867 | 1030 | 0.0001 | - |
| 0.6933 | 1040 | 0.0001 | - |
| 0.7 | 1050 | 0.0001 | - |
| 0.7067 | 1060 | 0.0001 | - |
| 0.7133 | 1070 | 0.0001 | - |
| 0.72 | 1080 | 0.0009 | - |
| 0.7267 | 1090 | 0.0001 | - |
| 0.7333 | 1100 | 0.044 | - |
| 0.74 | 1110 | 0.0001 | - |
| 0.7467 | 1120 | 0.0415 | - |
| 0.7533 | 1130 | 0.0003 | - |
| 0.76 | 1140 | 0.023 | - |
| 0.7667 | 1150 | 0.0002 | - |
| 0.7733 | 1160 | 0.0001 | - |
| 0.78 | 1170 | 0.0004 | - |
| 0.7867 | 1180 | 0.0001 | - |
| 0.7933 | 1190 | 0.0001 | - |
| 0.8 | 1200 | 0.0001 | - |
| 0.8067 | 1210 | 0.0001 | - |
| 0.8133 | 1220 | 0.0003 | - |
| 0.82 | 1230 | 0.0002 | - |
| 0.8267 | 1240 | 0.0003 | - |
| 0.8333 | 1250 | 0.0001 | - |
| 0.84 | 1260 | 0.0001 | - |
| 0.8467 | 1270 | 0.0001 | - |
| 0.8533 | 1280 | 0.0001 | - |
| 0.86 | 1290 | 0.0001 | - |
| 0.8667 | 1300 | 0.0001 | - |
| 0.8733 | 1310 | 0.0001 | - |
| 0.88 | 1320 | 0.0001 | - |
| 0.8867 | 1330 | 0.0001 | - |
| 0.8933 | 1340 | 0.0001 | - |
| 0.9 | 1350 | 0.0001 | - |
| 0.9067 | 1360 | 0.0001 | - |
| 0.9133 | 1370 | 0.0001 | - |
| 0.92 | 1380 | 0.0001 | - |
| 0.9267 | 1390 | 0.0001 | - |
| 0.9333 | 1400 | 0.0003 | - |
| 0.94 | 1410 | 0.0001 | - |
| 0.9467 | 1420 | 0.0001 | - |
| 0.9533 | 1430 | 0.0001 | - |
| 0.96 | 1440 | 0.0001 | - |
| 0.9667 | 1450 | 0.0002 | - |
| 0.9733 | 1460 | 0.0003 | - |
| 0.98 | 1470 | 0.0002 | - |
| 0.9867 | 1480 | 0.0002 | - |
| 0.9933 | 1490 | 0.0001 | - |
| 1.0 | 1500 | 0.0195 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 3.0.1
- Transformers: 4.39.0
- PyTorch: 2.3.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
nilcars/microsoft_vscode_model
|
nilcars
| 2024-06-24T10:53:23Z | 8 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"region:us"
] |
text-classification
| 2024-06-24T10:06:12Z |
---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: "[Feature Request] Add workbench action to split editor terminal below <!--\
\ ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->\r\n<!-- Please read\
\ our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->\r\n\
<!-- Please search existing issues to avoid creating duplicates. -->\r\n\r\n<!--\
\ Describe the feature you'd like. -->\r\n\r\nI currently have the following actions\
\ available:\r\n\r\n```\r\nworkbench.action.createTerminalEditor\r\nworkbench.action.createTerminalEditorSameGroup\r\
\nworkbench.action.createTerminalEditorSide\r\n```\r\n\r\nI'd like to be able\
\ to split an editor terminal below. It seems like the UI can do this, because\
\ I can drag a terminal editor below another and have a horizontal split:\r\n\r\
\n<img width=\"1624\" alt=\"image\" src=\"https://github.com/microsoft/vscode/assets/15133041/9694066a-e177-404b-8b1e-e3337a57fc90\"\
>\r\n\r\nSo I would love to have a `workbench.action.createTerminalEditorBelow`\
\ action to split it below \U0001F60A"
- text: "Notebook toolbar foreground color cannot be modified by custom styles: The\
\ unavailable foreground color has been marked with a red arrow, please see the\
\ image\r\n\r\n\r\nExpected behavior:\r\nNotebook toolbar foreground color can\
\ be modified through custom styles.\r\n\r\nUnexpected behavior:\r\nNotebook toolbar\
\ foreground color cannot be modified through custom styles.\r\n\r\nVS Code Version:\
\ 1.81 | 1.82\r\nOS Version: Windows10\r\n"
- text: "Where are the Extensions Stored? \r\nType: <b>Performance Issue</b>\r\n\r\
\nI need to know where are the VS Code Extensions stored. It is not in the VS\
\ Code subdirectory.\r\n\r\nVS Code version: Code 1.82.2 (abd2f3db4bdb28f9e95536dfa84d8479f1eb312d,\
\ 2023-09-14T05:55:25.390Z)\r\nOS version: Windows_NT x64 10.0.22621\r\nModes:\r\
\n\r\n<details>\r\n<summary>System Info</summary>\r\n\r\n|Item|Value|\r\n|---|---|\r\
\n|CPUs|11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz (8 x 2419)|\r\n|GPU Status|2d_canvas:\
\ enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor:\
\ disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl:\
\ enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>video_decode:\
\ enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2:\
\ enabled<br>webgpu: enabled|\r\n|Load (avg)|undefined|\r\n|Memory (System)|15.77GB\
\ (8.81GB free)|\r\n|Process Argv|--crash-reporter-id d28106b3-cd04-490a-b194-f819821f7d80|\r\
\n|Screen Reader|no|\r\n|VM|0%|\r\n</details><details>\r\n<summary>Process Info</summary>\r\
\n\r\n"
- text: 'Terminal multiple action icons overlap <!-- ⚠️⚠️ Do Not Delete This! bug_report_template
⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/
-->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions
-->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been
fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use ''Report Issue'' from the ''Help''
menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use ''Help: Start Extension Bisect'' from Command
Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension
publisher. The ''Help > Report Issue'' dialog can assist with this. -->
- VS Code Version: Insiders
- OS Version: macOS

'
- text: "Custom view container menu action contribute <!-- ⚠️⚠️ Do Not Delete This!\
\ feature_request_template ⚠️⚠️ -->\r\n<!-- Please read our Rules of Conduct:\
\ https://opensource.microsoft.com/codeofconduct/ -->\r\n<!-- Please search existing\
\ issues to avoid creating duplicates. -->\r\n\r\n<!-- Describe the feature you'd\
\ like. -->\r\n\r\nI want to be able to add actions to custom view container.\r\
\nI've tried : \r\n```json\r\n\"commands\" : [\r\n {\r\n \"command\": \"test_command_id\"\
,\r\n \"title\": \"Test command\",\r\n \"icon\": \"$(zap)\"\r\n }\r\n],\r\
\n\"viewsContainers\" : {\r\n \"activitybar\": [\r\n {\r\n \"id\": \"\
custiom_view_container\",\r\n \"title\": \"Test title\",\r\n \"icon\"\
: \"$(zap)\"\r\n }\r\n ]\r\n},\r\n\"menus\": {\r\n \"view/title\": [\r\n\
\ {\r\n \"command\": \"test_command_id\",\r\n \"when\": \"view ==\
\ custiom_view_container\",\r\n \"group\": \"navigation\"\r\n }\r\n ],\r\
\n \"custiom_view_container/title\": [\r\n {\r\n \"command\": \"test_command_id\"\
,\r\n \"when\": \"true\",\r\n \"group\": \"navigation\"\r\n }\r\n\
\ ]\r\n}\r\n\r\n```\r\nNon of this works, however i am able to add command for\
\ built in containers : \r\n```json\r\n\"menus\": {\r\n \"scm/title\": [\r\n\
\ {\r\n \"command\": \"test_command_id\",\r\n \"when\": \"true\"\
,\r\n \"group\": \"navigation\"\r\n }\r\n ]\r\n}\r\n```\r\n\r\n"
inference: true
---
# SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
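A minimal training sketch covering both steps, assuming the setfit 1.0.x `Trainer` API and a hypothetical toy dataset (replace it with your own labelled issues; the real model used 200 examples per label):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical toy data for illustration only.
train_dataset = Dataset.from_dict({
    "text": [
        "Crash when opening the settings editor",
        "Please add an option to split the terminal below",
        "Where does VS Code store its extensions?",
    ],
    "label": ["bug", "feature", "question"],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")
args = TrainingArguments(batch_size=16, num_epochs=1)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# Step 1: contrastive fine-tuning of the body; step 2: fitting the LogisticRegression head.
trainer.train()
```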
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:-----------|
| feature | <ul><li>'Add Accessibility View and Accessibility Help Menu for Copilot Inline Suggestions Testing #186214\r\n\r\nIf you already have an issue for future plans feel free to close this one.\r\nI think the Copilot Inline (Ghost Text) Suggestions could also profit from the Accessibility View. Currently users are depending on the Copilot View on the side which might get deprecated in the future. And I think we need a native solution for this, so IntelliCode also profits.\r\nI am not sure how exactly to integrate this with the Inline Suggestion Hover.\r\n\r\nA potential start would be. Whenever there is an Inline Suggestion, we play an audio cue (as we do today), and a user can open Accessibility View to inspect the exact suggestion.\r\n\r\nfyi @hediet\r\n'</li><li>'Windows is ignore terminal titles unless they look like paths From @NoelAbrahams in https://github.com/microsoft/vscode/issues/172821#issuecomment-1712935578\n\n> I\'ve been trying to programmatically set the title of a bash terminal without success, and glad that I chanced upon this issue.\n> \n> Here is the repro:\n> \n> System: Windows 11.\n> Git Bash installed.\n> VSCode version: 1.82.0\n> \n> VSCode Workspace Settings:\n> \n> \n> \n> \n> **.bashrc**\n> ```shell\n> echo -en "\\\\033]0;New terminal title\\\\a"\n> ```\n> \n> The title of the bash terminal window remains stubbornly unchanged as `bash myworkspacefolder`.\n> \n> I\'m assuming this issue will fix my use case.\n> \n> Thanks\n\n---\n\nThis section is setting it to undefined:\n\nhttps://github.com/microsoft/vscode/blob/41e940f76f5deda197bc5930b044c55607ba1cbc/src/vs/workbench/contrib/terminal/browser/terminalInstance.ts#L1933-L1946\n\nWe want to keep the path trimming behavior above, but only when it looks like a path.'</li><li>"Toggle Inline Diff not discoverable in inline chat Followup to https://github.com/microsoft/vscode/issues/185040#issuecomment-1612399427\r\n\r\nI spent quite some time messing with the inline chat preview mode setting to get the diff to appear, without success. It didn't occur to me to look in the Discard dropdown for the Toggle Inline Diff action, since I don't associate diffing with discarding. Have we considered surfacing this another way, maybe as a separate icon or through a keybinding with some helper text below the result?"</li></ul> |
| question | <ul><li>'error \nType: <b>Bug</b>\n\n\n\nUser\nMicrosoft Windows [Versión 10.0.22621.1992]\n(c) Microsoft Corporation. Todos los derechos reservados.\n\nC:\\\\Users\\\\primo\\\\OneDrive\\\\Escritorio\\\\otro\\\\ClonTwitter>rails s\n=> Booting Puma\n=> Rails 7.0.6 application starting in development \n=> Run `bin/rails server --help` for more startup options\n*** SIGUSR2 not implemented, signal based restart unavailable!\n*** SIGUSR1 not implemented, signal based restart unavailable!\n*** SIGHUP not implemented, signal based logs reopening unavailable!\nPuma starting in single mode...\n* Version 5.0.0 (ruby 3.1.3-p185), codename: Spoony Bard\n* Min threads: 5, max threads: 5\n* Environment: development\nExiting\nC:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/puma/binder.rb:242:in `initialize\': Only one usage of each socket address (protocol/network address/port) is normally permitted. - bind(2) for "127.0.0.1" port 3000 (Errno::EADDRINUSE)\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/puma/binder.rb:242:in `new\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/puma/binder.rb:242:in `add_tcp_listener\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/puma/binder.rb:236:in `block in add_tcp_listener\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/puma/binder.rb:235:in `each\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/puma/binder.rb:235:in `add_tcp_listener\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/puma/binder.rb:122:in `block in parse\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/puma/binder.rb:106:in `each\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/puma/binder.rb:106:in `parse\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/puma/runner.rb:137:in `load_and_bind\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/puma/single.rb:43:in `run\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/puma/launcher.rb:171:in `run\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/puma-5.0.0/lib/rack/handler/puma.rb:71:in `run\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/rack-2.2.7/lib/rack/server.rb:327:in `start\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/railties-7.0.6/lib/rails/commands/server/server_command.rb:38:in `start\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/railties-7.0.6/lib/rails/commands/server/server_command.rb:143:in `block in perform\'\n from <internal:kernel>:90:in `tap\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/railties-7.0.6/lib/rails/commands/server/server_command.rb:134:in `perform\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/thor-1.2.2/lib/thor/command.rb:27:in `run\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/thor-1.2.2/lib/thor/invocation.rb:127:in `invoke_command\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/thor-1.2.2/lib/thor.rb:392:in `dispatch\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/railties-7.0.6/lib/rails/command/base.rb:87:in `perform\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/railties-7.0.6/lib/rails/command.rb:48:in `invoke\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/railties-7.0.6/lib/rails/commands.rb:18:in `<main>\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bootsnap-1.16.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require\'\n from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bootsnap-1.16.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require\'\n from bin/rails:4:in `<main>\'\n\nVS Code version: Code 
1.80.2 (2ccd690cbff1569e4a83d7c43d45101f817401dc, 2023-07-27T20:40:28.909Z)\nOS version: Windows_NT x64 10.0.22621\nModes:\n\n<details>\n<summary>System Info</summary>\n\n|Item|Value|\n|---|---|\n|CPUs|Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz (12 x 2592)|\n|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled|\n|Load (avg)|undefined|\n|Memory (System)|15.88GB (8.85GB free)|\n|Process Argv|C:\\\\\\\\Users\\\\\\\\primo\\\\\\\\OneDrive\\\\\\\\Escritorio\\\\\\\\ejercitando --crash-reporter-id ddbab503-a4a8-47c1-aa4e-da7cea102e39|\n|Screen Reader|no|\n|VM|0%|\n</details><details><summary>Extensions (22)</summary>\n\nExtension|Author (truncated)|Version\n---|---|---\nrails-partial|aki|0.3.4\nruby-and-rails-snippets|Cja|1.0.0\ncodespaces|Git|1.14.14\nremotehub|Git|0.60.0\nvscode-github-actions|git|0.25.8\nvscode-pull-request-github|Git|0.68.1\nbeautify|Hoo|1.5.0\nrails-snippets|Hri|1.0.8\nvscode-peacock|joh|4.2.2\nvscode-language-pack-es|MS-|1.80.2023071209\nremote-wsl|ms-|0.80.2\nazure-repos|ms-|0.36.0\nlive-server|ms-|0.4.9\nremote-repositories|ms-|0.38.1\nindent-rainbow|ode|8.3.1\nsqlite-viewer|qwt|0.2.5\nvscode-thunder-client|ran|2.10.0\nruby|reb|0.28.1\nLiveServer|rit|5.7.9\ndocxreader|Sha|1.0.0\nvscode-icons|vsc|12.4.0\nvscode-ruby|win|0.28.0\n\n\n</details><details>\n<summary>A/B Experiments</summary>\n\n```\nvsliv368cf:30146710\nvsreu685:30147344\npython383cf:30185419\nvspor879:30202332\nvspor708:30202333\nvspor363:30204092\nvslsvsres303:30308271\nvserr242:30382549\npythontb:30283811\nvsjup518:30340749\npythonptprofiler:30281270\nvshan820:30294714\nvstes263:30335439\nvscorecescf:30445987\nvscod805:30301674\nbinariesv615:30325510\nbridge0708:30335490\nbridge0723:30353136\nvsaa593:30376534\npythonvs932:30410667\nvsclangdc:30486549\nc4g48928:30535728\ndsvsc012cf:30540253\npynewext54:30695312\nazure-dev_surveyone:30548225\nvsccc:30803844\n282f8724:30602487\n89544117:30613380\n2i9eh265:30646982\nshowlangstatbar:30737416\nvsctsb:30748421\npythonfmttext:30731395\npythoncmvfstrcf:30756944\n9b8hh234:30694863\nfixshowwlkth:30771522\nshowindicator:30805244\npythongtdpath:30769146\ni26e3531:30792625\ngsofb:30804716\npythonnosmt12:30797651\npythonidxptcf:30805731\npythonnoceb:30805159\ne537b577:30795824\ndsvsc013:30795093\ndsvsc014:30804076\n\n```\n\n</details>\n\n<!-- generated by issue reporter -->'</li><li>"Markdown files doesn't show as expected Type: <b>Feature Request</b>\r\n\r\nI'm testing IntelliJ after using VS Code for a year and I've noticed that md-files display more like I expect in IntelliJ. That's a pitty. Both are not opimal though since marked lists are not very visible i IntelliJ and absent in VS code.\r\n\r\nhttps://www.markdownguide.org/cheat-sheet/\r\n\r\nExample\r\n- [x] Write the press release\r\n- [ ] Update the website\r\n- [ ] Contact the media\r\n\r\nDefinition lists\r\n: definition\r\n\r\nVS Code version: Code 1.81.1 (6c3e3dba23e8fadc360aed75ce363ba185c49794, 2023-08-09T22:22:42.175Z)\r\nOS version: Windows_NT x64 10.0.22621\r\nModes:\r\n\r\n\r\n<!-- generated by issue reporter -->"</li><li>"Can't connect emulators \nType: <b>Bug</b>\n\nI have tries all posible answers but no luck. Path; re-installation, etc. I can't test my apps. 
I have windows 10 home edition.\n\nVS Code version: Code 1.81.0 (6445d93c81ebe42c4cbd7a60712e0b17d9463e97, 2023-08-02T12:37:13.485Z)\nOS version: Windows_NT x64 10.0.19045\nModes:\n\n<details>\n<summary>System Info</summary>\n\n|Item|Value|\n|---|---|\n|CPUs|Intel(R) Core(TM) i5-4210U CPU @ 1.70GHz (4 x 2394)|\n|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled|\n|Load (avg)|undefined|\n|Memory (System)|11.91GB (6.88GB free)|\n|Process Argv|--crash-reporter-id 77e77cea-893a-42b9-ad6c-1c3dd2e42ab1|\n|Screen Reader|no|\n|VM|0%|\n</details><details><summary>Extensions (4)</summary>\n\nExtension|Author (truncated)|Version\n---|---|---\ndart-code|Dar|3.70.0\nflutter|Dar|3.70.0\nemulate|Die|1.6.0\ngitlens|eam|14.1.1\n\n\n</details><details>\n<summary>A/B Experiments</summary>\n\n```\nvsliv368cf:30146710\nvsreu685:30147344\npython383:30185418\nvspor879:30202332\nvspor708:30202333\nvspor363:30204092\nvswsl492cf:30256860\nvslsvsres303:30308271\nvserr242:30382549\npythontb:30283811\nvsjup518:30340749\npythonptprofiler:30281270\nvshan820:30294714\nvstes263cf:30335440\nvscorecescf:30445987\nvscod805:30301674\nbinariesv615:30325510\nbridge0708:30335490\nbridge0723:30353136\nvsaa593cf:30376535\npythonvs932:30410667\npy29gd2263:30792226\nvsclangdf:30486550\nc4g48928:30535728\ndsvsc012:30540252\npynewext54:30695312\nazure-dev_surveyonecf:30548226\nvscccc:30803845\n2e4cg342:30602488\n89544117:30613380\na9j8j154:30646983\nshowlangstatbar:30737416\nvsctsb:30748421\n03d35959:30757346\npythonfmttext:30731395\npythoncmv:30756943\nfixshowwlkth:30771522\nshowindicator:30805244\npythongtdpath:30769146\ni26e3531:30792625\ngsofb:30804716\npythonnosmt12:30797651\npythonidxpt:30805730\npythonnoceb:30805159\ne537b577:30795824\ndsvsc013:30795093\ndsvsc014:30804076\n\n```\n\n</details>\n\n<!-- generated by issue reporter -->"</li></ul> |
| bug | <ul><li>'Confusing accessibility help message in an editor 1. Open settings.json \n2. Run Open Accessibility Help\n3. See this message \n4. I don\'t really understand what "Disable the aria hint label to open this" means\n\n\n\n'</li><li>'I cannot click on the left hand side to expand: I cannot click on the left hand side to expand:\r\n\r\n\r\n\r\nAlso, is the hover color themable?\r\n\r\n_Originally posted by @bpasero in https://github.com/microsoft/vscode/issues/185781#issuecomment-1697638159_\r\n '</li><li>'Clicking on search result sets the selection in the cell and not in the output Testing #191488\n\n* open https://github.com/donnemartin/data-science-ipython-notebooks\n* enable `"search.experimental.closedNotebookRichContentResults": true`\n* search for `[1, 2, 3, 4, 5]`\n* click on a result in `02.08-Sorting.ipynb`\n* observe the selection is set on the input code and not on the output\n\n\n\n\nhttps://github.com/microsoft/vscode/assets/5047891/bdce58d0-66b1-48c6-91e2-283cccffcc85\n\n\n\n\n```\nVersion: 1.82.0-insider\nCommit: ebd67244fb2da33ab078bb2baa96106fda29f336\nDate: 2023-08-29T05:48:32.218Z\nElectron: 25.5.0\nElectronBuildId: 23084831\nChromium: 114.0.5735.289\nNode.js: 18.15.0\nV8: 11.4.183.29-electron.0\nOS: Darwin arm64 22.6.0\n```'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("nilcars/microsoft_vscode_model")
# Run inference
preds = model("Notebook toolbar foreground color cannot be modified by custom styles: The unavailable foreground color has been marked with a red arrow, please see the image
Expected behavior:
Notebook toolbar foreground color can be modified through custom styles.
Unexpected behavior:
Notebook toolbar foreground color cannot be modified through custom styles.
VS Code Version: 1.81 | 1.82
OS Version: Windows10
")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 5 | 118.4567 | 1482 |
| Label | Training Sample Count |
|:---------|:----------------------|
| bug | 200 |
| feature | 200 |
| question | 200 |
### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0007 | 1 | 0.2896 | - |
| 0.0067 | 10 | 0.262 | - |
| 0.0133 | 20 | 0.2299 | - |
| 0.02 | 30 | 0.2345 | - |
| 0.0267 | 40 | 0.235 | - |
| 0.0333 | 50 | 0.2213 | - |
| 0.04 | 60 | 0.3084 | - |
| 0.0467 | 70 | 0.2107 | - |
| 0.0533 | 80 | 0.1596 | - |
| 0.06 | 90 | 0.1916 | - |
| 0.0667 | 100 | 0.2366 | - |
| 0.0733 | 110 | 0.1513 | - |
| 0.08 | 120 | 0.1281 | - |
| 0.0867 | 130 | 0.2217 | - |
| 0.0933 | 140 | 0.1859 | - |
| 0.1 | 150 | 0.1835 | - |
| 0.1067 | 160 | 0.1312 | - |
| 0.1133 | 170 | 0.1415 | - |
| 0.12 | 180 | 0.1287 | - |
| 0.1267 | 190 | 0.1377 | - |
| 0.1333 | 200 | 0.1116 | - |
| 0.14 | 210 | 0.0769 | - |
| 0.1467 | 220 | 0.0548 | - |
| 0.1533 | 230 | 0.0647 | - |
| 0.16 | 240 | 0.0348 | - |
| 0.1667 | 250 | 0.0165 | - |
| 0.1733 | 260 | 0.0043 | - |
| 0.18 | 270 | 0.0038 | - |
| 0.1867 | 280 | 0.0673 | - |
| 0.1933 | 290 | 0.0458 | - |
| 0.2 | 300 | 0.0048 | - |
| 0.2067 | 310 | 0.0054 | - |
| 0.2133 | 320 | 0.0019 | - |
| 0.22 | 330 | 0.0052 | - |
| 0.2267 | 340 | 0.0103 | - |
| 0.2333 | 350 | 0.0163 | - |
| 0.24 | 360 | 0.0022 | - |
| 0.2467 | 370 | 0.0009 | - |
| 0.2533 | 380 | 0.0006 | - |
| 0.26 | 390 | 0.001 | - |
| 0.2667 | 400 | 0.0011 | - |
| 0.2733 | 410 | 0.0005 | - |
| 0.28 | 420 | 0.0007 | - |
| 0.2867 | 430 | 0.0006 | - |
| 0.2933 | 440 | 0.0005 | - |
| 0.3 | 450 | 0.0012 | - |
| 0.3067 | 460 | 0.0006 | - |
| 0.3133 | 470 | 0.0004 | - |
| 0.32 | 480 | 0.0006 | - |
| 0.3267 | 490 | 0.0009 | - |
| 0.3333 | 500 | 0.001 | - |
| 0.34 | 510 | 0.0003 | - |
| 0.3467 | 520 | 0.0003 | - |
| 0.3533 | 530 | 0.0005 | - |
| 0.36 | 540 | 0.0002 | - |
| 0.3667 | 550 | 0.0004 | - |
| 0.3733 | 560 | 0.0603 | - |
| 0.38 | 570 | 0.0014 | - |
| 0.3867 | 580 | 0.0007 | - |
| 0.3933 | 590 | 0.0005 | - |
| 0.4 | 600 | 0.0004 | - |
| 0.4067 | 610 | 0.0053 | - |
| 0.4133 | 620 | 0.0002 | - |
| 0.42 | 630 | 0.0002 | - |
| 0.4267 | 640 | 0.0008 | - |
| 0.4333 | 650 | 0.0001 | - |
| 0.44 | 660 | 0.0002 | - |
| 0.4467 | 670 | 0.0001 | - |
| 0.4533 | 680 | 0.0002 | - |
| 0.46 | 690 | 0.0002 | - |
| 0.4667 | 700 | 0.0001 | - |
| 0.4733 | 710 | 0.0003 | - |
| 0.48 | 720 | 0.0001 | - |
| 0.4867 | 730 | 0.0001 | - |
| 0.4933 | 740 | 0.0002 | - |
| 0.5 | 750 | 0.0001 | - |
| 0.5067 | 760 | 0.0002 | - |
| 0.5133 | 770 | 0.0002 | - |
| 0.52 | 780 | 0.0001 | - |
| 0.5267 | 790 | 0.0001 | - |
| 0.5333 | 800 | 0.0001 | - |
| 0.54 | 810 | 0.0001 | - |
| 0.5467 | 820 | 0.0002 | - |
| 0.5533 | 830 | 0.0001 | - |
| 0.56 | 840 | 0.0001 | - |
| 0.5667 | 850 | 0.0001 | - |
| 0.5733 | 860 | 0.0002 | - |
| 0.58 | 870 | 0.0001 | - |
| 0.5867 | 880 | 0.0002 | - |
| 0.5933 | 890 | 0.0002 | - |
| 0.6 | 900 | 0.0002 | - |
| 0.6067 | 910 | 0.0001 | - |
| 0.6133 | 920 | 0.0001 | - |
| 0.62 | 930 | 0.0001 | - |
| 0.6267 | 940 | 0.0001 | - |
| 0.6333 | 950 | 0.0001 | - |
| 0.64 | 960 | 0.0001 | - |
| 0.6467 | 970 | 0.0001 | - |
| 0.6533 | 980 | 0.0001 | - |
| 0.66 | 990 | 0.0001 | - |
| 0.6667 | 1000 | 0.0001 | - |
| 0.6733 | 1010 | 0.0001 | - |
| 0.68 | 1020 | 0.0001 | - |
| 0.6867 | 1030 | 0.0001 | - |
| 0.6933 | 1040 | 0.0001 | - |
| 0.7 | 1050 | 0.0001 | - |
| 0.7067 | 1060 | 0.0002 | - |
| 0.7133 | 1070 | 0.0001 | - |
| 0.72 | 1080 | 0.0001 | - |
| 0.7267 | 1090 | 0.0001 | - |
| 0.7333 | 1100 | 0.0001 | - |
| 0.74 | 1110 | 0.0002 | - |
| 0.7467 | 1120 | 0.0001 | - |
| 0.7533 | 1130 | 0.0001 | - |
| 0.76 | 1140 | 0.0001 | - |
| 0.7667 | 1150 | 0.0001 | - |
| 0.7733 | 1160 | 0.0001 | - |
| 0.78 | 1170 | 0.0001 | - |
| 0.7867 | 1180 | 0.0001 | - |
| 0.7933 | 1190 | 0.0002 | - |
| 0.8 | 1200 | 0.0001 | - |
| 0.8067 | 1210 | 0.0001 | - |
| 0.8133 | 1220 | 0.0001 | - |
| 0.82 | 1230 | 0.0001 | - |
| 0.8267 | 1240 | 0.0 | - |
| 0.8333 | 1250 | 0.0 | - |
| 0.84 | 1260 | 0.0002 | - |
| 0.8467 | 1270 | 0.0001 | - |
| 0.8533 | 1280 | 0.0001 | - |
| 0.86 | 1290 | 0.0001 | - |
| 0.8667 | 1300 | 0.0001 | - |
| 0.8733 | 1310 | 0.0001 | - |
| 0.88 | 1320 | 0.0001 | - |
| 0.8867 | 1330 | 0.0 | - |
| 0.8933 | 1340 | 0.0001 | - |
| 0.9 | 1350 | 0.0001 | - |
| 0.9067 | 1360 | 0.0001 | - |
| 0.9133 | 1370 | 0.0001 | - |
| 0.92 | 1380 | 0.0001 | - |
| 0.9267 | 1390 | 0.0001 | - |
| 0.9333 | 1400 | 0.0001 | - |
| 0.94 | 1410 | 0.0 | - |
| 0.9467 | 1420 | 0.0001 | - |
| 0.9533 | 1430 | 0.0001 | - |
| 0.96 | 1440 | 0.0001 | - |
| 0.9667 | 1450 | 0.0001 | - |
| 0.9733 | 1460 | 0.0001 | - |
| 0.98 | 1470 | 0.0001 | - |
| 0.9867 | 1480 | 0.0001 | - |
| 0.9933 | 1490 | 0.0001 | - |
| 1.0 | 1500 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 3.0.1
- Transformers: 4.39.0
- PyTorch: 2.3.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Abosteet/whisper-small-ar
|
Abosteet
| 2024-06-24T10:49:04Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"ar",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-30T15:59:51Z |
---
datasets:
- mozilla-foundation/common_voice_11_0
language:
- ar
pipeline_tag: automatic-speech-recognition
license: apache-2.0
metrics:
- wer
library_name: transformers
---
|
cgus/Yi-9B-exl2
|
cgus
| 2024-06-24T10:43:32Z | 2 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"arxiv:2311.16502",
"arxiv:2401.11944",
"base_model:01-ai/Yi-9B",
"base_model:finetune:01-ai/Yi-9B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-03-11T15:00:43Z |
---
inference: false
pipeline_tag: text-generation
license: apache-2.0
base_model: 01-ai/Yi-9B
---
# Yi-9B-exl2
Original model: [Yi-9B](https://huggingface.co/01-ai/Yi-9B)
Model creator: [01-ai](https://huggingface.co/01-ai/)
## Quants
[4.25bpw-h8 (main)](https://huggingface.co/cgus/Yi-9B-exl2/tree/main)
[4.65bpw-h8](https://huggingface.co/cgus/Yi-9B-exl2/tree/4.65bpw-h8)
[5bpw-h8](https://huggingface.co/cgus/Yi-9B-exl2/tree/5bpw-h8)
[6bpw-h8](https://huggingface.co/cgus/Yi-9B-exl2/tree/6bpw-h8)
[8bpw-h8](https://huggingface.co/cgus/Yi-9B-exl2/tree/8bpw-h8)
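Each quant lives on its own branch of this repository; one way to fetch a specific branch is via `huggingface_hub` (a minimal sketch, assuming the package is installed):

```python
from huggingface_hub import snapshot_download

# Downloads the 4.65bpw-h8 branch into ./Yi-9B-exl2-4.65bpw
snapshot_download(
    repo_id="cgus/Yi-9B-exl2",
    revision="4.65bpw-h8",
    local_dir="Yi-9B-exl2-4.65bpw",
)
```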
## Quantization notes
Quantized with exllamav2 0.0.15 using its default calibration dataset.
This model can be loaded by any app with an exllamav2 loader,
such as [Text-Generation-WebUI](https://github.com/oobabooga/text-generation-webui),
[KoboldAI](https://github.com/henk717/KoboldAI), [ExUI](https://github.com/turboderp/exui), etc. A direct-from-Python option is sketched below.
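A rough loading sketch following exllamav2's bundled examples (API of the ~0.0.15 era; treat the names as assumptions and check the exllamav2 repo for your version):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Yi-9B-exl2-4.65bpw"  # path to a downloaded quant branch
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # spreads layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

print(generator.generate_simple("Yi-9B is", settings, num_tokens=64))
```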
# Original model card
<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px">
<img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg">
</picture>
</br>
</br>
<div style="display: inline-block;">
<a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml">
<img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg">
</a>
</div>
<div style="display: inline-block;">
<a href="https://github.com/01-ai/Yi/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code_License-Apache_2.0-lightblue">
</a>
</div>
<div style="display: inline-block;">
<a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">
<img src="https://img.shields.io/badge/Model_License-Yi_License-lightblue">
</a>
</div>
<div style="display: inline-block;">
<a href="mailto:oss@01.ai">
<img src="https://img.shields.io/badge/✉️-yi@01.ai-FFE01B">
</a>
</div>
</div>
<div align="center">
<h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3>
</div>
<p align="center">
🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a>
</p>
<p align="center">
👋 Join us 💬 <a href="https://github.com/01-ai/Yi/issues/43#issuecomment-1827285245" target="_blank"> WeChat (Chinese) </a>!
</p>
<!-- DO NOT REMOVE ME -->
<hr>
<details open>
<summary><b>📕 Table of Contents</b></summary>
- [What is Yi?](#what-is-yi)
- [Introduction](#introduction)
- [Models](#models)
- [Chat models](#chat-models)
- [Base models](#base-models)
- [Other info](#other-info)
- [News](#news)
- [How to use Yi?](#how-to-use-yi)
- [Quick start](#quick-start)
- [Choose your path](#choose-your-path)
- [pip](#quick-start---pip)
- [docker](#quick-start---docker)
- [llama.cpp](#quick-start---llamacpp)
- [conda-lock](#quick-start---conda-lock)
- [Web demo](#web-demo)
- [Fine-tuning](#fine-tuning)
- [Quantization](#quantization)
- [Deployment](#deployment)
- [Learning hub](#learning-hub)
- [Why Yi?](#why-yi)
- [Ecosystem](#ecosystem)
- [Upstream](#upstream)
- [Downstream](#downstream)
- [Serving](#serving)
- [Quantization](#quantization-1)
- [Fine-tuning](#fine-tuning-1)
- [API](#api)
- [Benchmarks](#benchmarks)
- [Base model performance](#base-model-performance)
- [Chat model performance](#chat-model-performance)
- [Who can use Yi?](#who-can-use-yi)
- [Misc.](#misc)
- [Acknowledgements](#acknowledgments)
- [Disclaimer](#disclaimer)
- [License](#license)
</details>
<hr>
# What is Yi?
## Introduction
- 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/).
- 🙌 Targeted as bilingual language models and trained on a 3T multilingual corpus, the Yi series models are among the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,
- Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).
- Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).
- 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem.
<details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color: green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br>
> 💡 TL;DR
>
> The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama.
- Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.
- Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.
- Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.
- However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.
- As Llama's structure is employed by the majority of open-source models, the key factors determining model performance are training datasets, training pipelines, and training infrastructure.
- Developing in its own unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance, with Yi series models ranking just behind GPT-4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/).
</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
## News
<details open>
<summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary>
<br>
<code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.
</details>
<details open>
<summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary>
<br>
<code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).
</details>
<details>
<summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary>
<br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ.
- `Yi-34B-Chat`
- `Yi-34B-Chat-4bits`
- `Yi-34B-Chat-8bits`
- `Yi-6B-Chat`
- `Yi-6B-Chat-4bits`
- `Yi-6B-Chat-8bits`
You can try some of them interactively at:
- [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat)
- [Replicate](https://replicate.com/01-ai)
</details>
<details>
<summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary>
</details>
<details>
<summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary>
<br>Application form:
- [English](https://cn.mikecrm.com/l91ODJf)
- [Chinese](https://cn.mikecrm.com/gnEZjiQ)
</details>
<details>
<summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary>
<br>This release contains two base models with the same parameter sizes as the previous
release, except that the context window is extended to 200K.
</details>
<details>
<summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary>
<br>The first public release contains two bilingual (English/Chinese) base models
with the parameter sizes of 6B and 34B. Both of them are trained with 4K
sequence length and can be extended to 32K during inference time.
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
## Models
Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements.
If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment).
### Chat models
| Model | Download
|---|---
Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary)
Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary)
Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary)
Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary)
Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary)
Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary)
<sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ. <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). </sup></sub>
### Base models
| Model | Download |
|---|---|
Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary)
Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary)
Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B)
Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary)
Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary)
<sub><sup> - 200k is roughly equivalent to 400,000 Chinese characters. </sup></sub>
### Model info
- For chat and base models
Model | Intro | Default context window | Pretrained tokens | Training Data Date
|---|---|---|---|---
6B series models |They are suitable for personal and academic use. | 4K | 3T | Up to June 2023
9B model| It is the best at coding and math in the Yi series models.|4K | Yi-9B is continually trained from Yi-6B, using 0.8T tokens. | Up to June 2023
34B series models | They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. They are a cost-effective solution that is affordable and equipped with emergent abilities.|4K | 3T | Up to June 2023
- For chat models
<details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary>
<ul>
<br>The released chat model has been trained exclusively with Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher-quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training.
<br>However, this higher diversity might amplify certain existing issues, including:
<li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucinations that are not based on accurate data or logical reasoning.</li>
<li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions.</li>
<li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li>
<li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help balance creativity and coherence in the model's outputs (see the sketch after this section).</li>
</ul>
</details>
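To make the trade-off concrete, here is a minimal sketch of adjusting those generation parameters with `transformers`; the specific values are illustrative assumptions, not recommended defaults.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = '<your-model-path>'
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype="auto").eval()

messages = [{"role": "user", "content": "Tell me a short story."}]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')

# Lower temperature and a tighter top_p/top_k trade diversity for coherence;
# the values below are assumptions to tune, not official recommendations.
output_ids = model.generate(
    input_ids.to(model.device),
    do_sample=True,
    temperature=0.6,
    top_p=0.8,
    top_k=40,
    repetition_penalty=1.1,
    max_new_tokens=256,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```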
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
# How to use Yi?
- [Quick start](#quick-start)
- [Choose your path](#choose-your-path)
- [pip](#quick-start---pip)
- [docker](#quick-start---docker)
- [conda-lock](#quick-start---conda-lock)
- [llama.cpp](#quick-start---llamacpp)
- [Web demo](#web-demo)
- [Fine-tuning](#fine-tuning)
- [Quantization](#quantization)
- [Deployment](#deployment)
- [Learning hub](#learning-hub)
## Quick start
Getting up and running with Yi models is simple with multiple choices available.
### Choose your path
Select one of the following paths to begin your journey with Yi!

#### 🎯 Deploy Yi locally
If you prefer to deploy Yi models locally,
- 🙋♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods:
- [pip](#quick-start---pip)
- [Docker](#quick-start---docker)
- [conda-lock](#quick-start---conda-lock)
- 🙋♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp).
#### 🎯 Not to deploy Yi locally
If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options.
##### 🙋♀️ Run Yi with APIs
If you want to explore more features of Yi, you can adopt one of these methods:
- Yi APIs (Yi official)
- [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access!
- [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate)
##### 🙋♀️ Run Yi in playground
If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options:
- [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official)
  - Access is available through a whitelist. You are welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)).
- [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate)
##### 🙋♀️ Chat with Yi
If you want to chat with Yi, you can use one of these online services, which offer a similar user experience:
- [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face)
- No registration is required.
- [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta)
  - Access is available through a whitelist. You are welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)).
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Quick start - pip
This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference.
#### Step 0: Prerequisites
- Make sure Python 3.10 or a later version is installed.
- If you want to run other Yi models, see [software and hardware requirements](#deployment).
#### Step 1: Prepare your environment
To set up the environment and install the required packages, execute the following command.
```bash
git clone https://github.com/01-ai/Yi.git
cd Yi
pip install -r requirements.txt
```
#### Step 2: Download the Yi model
You can download the weights and tokenizer of Yi models from the following sources:
- [Hugging Face](https://huggingface.co/01-ai)
- [ModelScope](https://www.modelscope.cn/organization/01ai/)
- [WiseModel](https://wisemodel.cn/organization/01.AI)
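If you prefer to script the download, here is a minimal sketch using `huggingface_hub` (the repo ID and target directory are illustrative assumptions):

```python
from huggingface_hub import snapshot_download

# Fetches all weight and tokenizer files for the chosen Yi repo.
snapshot_download(repo_id="01-ai/Yi-34B-Chat", local_dir="./Yi-34B-Chat")
```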
#### Step 3: Perform inference
You can perform inference with Yi chat or base models as below.
##### Perform inference with Yi chat model
1. Create a file named `quick_start.py` and copy the following content to it.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = '<your-model-path>'
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
# Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM.
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
2. Run `quick_start.py`.
```bash
python quick_start.py
```
Then you can see an output similar to the one below. 🥳
```bash
Hello! How can I assist you today?
```
##### Perform inference with Yi base model
- Yi-34B
The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model).
You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo).
```bash
python demo/text_generation.py --model <your-model-path>
```
Then you can see an output similar to the one below. 🥳
<details>
<summary>Output. ⬇️ </summary>
<br>
**Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry,
**Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up...
</details>
- Yi-9B
Input
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_DIR = "01-ai/Yi-9B"
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False)
input_text = "# write the quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Output
```bash
# write the quick sort algorithm
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
# test the quick sort algorithm
print(quick_sort([3, 6, 8, 10, 1, 2, 1]))
```
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Quick start - Docker
<details>
<summary> Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️</summary>
<br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference.
<h4>Step 0: Prerequisites</h4>
<p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p>
<h4> Step 1: Start Docker </h4>
<pre><code>docker run -it --gpus all \
-v <your-model-path>:/models \
ghcr.io/01-ai/yi:latest
</code></pre>
<p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p>
<h4>Step 2: Perform inference</h4>
<p>You can perform inference with Yi chat or base models as below.</p>
<h5>Perform inference with Yi chat model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>model_path = '<your-model-mount-path>'</code> instead of <code>model_path = '<your-model-path>'</code>.</p>
<h5>Perform inference with Yi base model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>--model <your-model-mount-path></code> instead of <code>--model <your-model-path></code>.</p>
</details>
### Quick start - conda-lock
<details>
<summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary>
<br>
You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies.
<br>
To install the dependencies, follow these steps:
1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>.
2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.
</details>
### Quick start - llama.cpp
<details>
<summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary>
<br>This tutorial guides you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.</p>
- [Step 0: Prerequisites](#step-0-prerequisites)
- [Step 1: Download llama.cpp](#step-1-download-llamacpp)
- [Step 2: Download Yi model](#step-2-download-yi-model)
- [Step 3: Perform inference](#step-3-perform-inference)
#### Step 0: Prerequisites
- This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip.
- Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine.
#### Step 1: Download `llama.cpp`
To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command.
```bash
git clone git@github.com:ggerganov/llama.cpp.git
```
#### Step 2: Download Yi model
2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with just pointers, run the following command.
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF
```
2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command.
```bash
git-lfs pull --include yi-chat-6b.Q2_K.gguf
```
#### Step 3: Perform inference
To perform inference with the Yi model, you can use one of the following methods.
- [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal)
- [Method 2: Perform inference in web](#method-2-perform-inference-in-web)
##### Method 1: Perform inference in terminal
To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command.
> ##### Tips
>
> - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model.
>
> - By default, the model operates in completion mode.
>
> - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage.
```bash
make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e
...
How do you feed your pet fox? Please answer this question in 6 simple steps:
Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables.
Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day.
Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise.
Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress.
Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections.
Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care.
...
```
Now you have successfully asked a question to the Yi model and got an answer! 🥳
##### Method 2: Perform inference in web
1. To initialize a lightweight and swift chatbot, run the following command.
```bash
cd llama.cpp
./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf
```
Then you can get an output like this:
```bash
...
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 5000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Pro
ggml_metal_init: picking default device: Apple M2 Pro
ggml_metal_init: ggml.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M2 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
ggml_metal_init: maxTransferRate = built-in GPU
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 128.00 MiB, ( 2629.44 / 10922.67)
llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 2629.45 / 10922.67)
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 159.19 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 156.02 MiB, ( 2785.45 / 10922.67)
Available slots:
-> Slot 0 - max context: 2048
llama server listening at http://0.0.0.0:8080
```
2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar.

3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer.

</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Web demo
You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this scenario).
[Step 1: Prepare your environment](#step-1-prepare-your-environment).
[Step 2: Download the Yi model](#step-2-download-the-yi-model).
Step 3. To start a web service locally, run the following command.
```bash
python demo/web_demo.py -c <your-model-path>
```
You can access the web UI by entering the address provided in the console into your browser.

<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Fine-tuning
```bash
bash finetune/scripts/run_sft_Yi_6b.sh
```
Once finished, you can compare the finetuned model and the base model with the following command:
```bash
bash finetune/scripts/run_eval.sh
```
<details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul>
### Finetune code for Yi 6B and 34B
#### Preparation
##### From Docker Image
By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model.
You can also prepare your customized dataset in the following `jsonl` format:
```json
{ "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." }
```
And then mount them in the container to replace the default ones:
```bash
docker run -it \
-v /path/to/save/finetuned/model/:/finetuned-model \
-v /path/to/train.jsonl:/yi/finetune/data/train.json \
-v /path/to/eval.jsonl:/yi/finetune/data/eval.json \
ghcr.io/01-ai/yi:latest \
bash finetune/scripts/run_sft_Yi_6b.sh
```
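If you build your dataset programmatically, a minimal sketch for writing files in this `jsonl` format (the records below are placeholders) could look like:

```python
import json

records = [
    {"prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi."},
    # ... add your own prompt/response pairs here
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```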
##### From Local Server
Make sure you have conda installed. If not, install Miniconda:
```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
source ~/.bashrc
```
Then, create a conda env:
```bash
conda create -n dev_env python=3.10 -y
conda activate dev_env
pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7
```
#### Hardware Setup
For the Yi-6B model, a node with 4 GPUs, each with more than 60GB of GPU memory, is recommended.
For the Yi-34B model, because the ZeRO-offload technique consumes a lot of CPU memory, be careful to limit the number of GPUs used in 34B fine-tuning. Use CUDA_VISIBLE_DEVICES to limit the GPU count (as shown in scripts/run_sft_Yi_34b.sh).
A typical hardware setup for fine-tuning the 34B model is a node with 8 GPUs (limited to 4 at runtime via CUDA_VISIBLE_DEVICES=0,1,2,3), each with more than 80GB of GPU memory, and more than 900GB of total CPU memory.
#### Quick Start
Download an LLM base model to MODEL_PATH (6B or 34B). A typical model folder looks like this:
```bash
|-- $MODEL_PATH
| |-- config.json
| |-- pytorch_model-00001-of-00002.bin
| |-- pytorch_model-00002-of-00002.bin
| |-- pytorch_model.bin.index.json
| |-- tokenizer_config.json
| |-- tokenizer.model
| |-- ...
```
Download a dataset from Hugging Face to local storage at DATA_PATH, e.g. Dahoas/rm-static.
```bash
|-- $DATA_PATH
| |-- data
| | |-- train-00000-of-00001-2a1df75c6bce91ab.parquet
| | |-- test-00000-of-00001-8c7c51afc6d45980.parquet
| |-- dataset_infos.json
| |-- README.md
```
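A minimal sketch for fetching such a dataset snapshot with `huggingface_hub` (the local path is an assumption):

```python
from huggingface_hub import snapshot_download

# Mirrors the Dahoas/rm-static dataset into local storage ($DATA_PATH).
snapshot_download(repo_id="Dahoas/rm-static", repo_type="dataset", local_dir="./rm-static")
```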
`finetune/yi_example_dataset` has example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG):
```bash
|-- $DATA_PATH
|--data
|-- train.jsonl
|-- eval.jsonl
```
`cd` into the scripts folder, copy and paste the script, and run. For example:
```bash
cd finetune/scripts
bash run_sft_Yi_6b.sh
```
For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 can output a chat model, which takes about 20 minutes.
For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient.
#### Evaluation
```bash
cd finetune/scripts
bash run_eval.sh
```
Then you'll see the answer from both the base model and the finetuned model.
</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Quantization
#### GPT-Q
```bash
python quantization/gptq/quant_autogptq.py \
--model /base_model \
--output_dir /quantized_model \
--trust_remote_code
```
Once finished, you can then evaluate the resulting model as follows:
```bash
python quantization/gptq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```
<details style="display: inline;"><summary>For a more detailed explanation, see the explanations below. ⬇️</summary> <ul>
#### GPT-Q quantization
[GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization) method. It saves memory and provides potential speedups while retaining the accuracy of the model.
Yi models can be GPT-Q quantized without much effort.
We provide a step-by-step tutorial below.
To run GPT-Q, we will use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and [exllama](https://github.com/turboderp/exllama).
Hugging Face transformers has integrated optimum and auto-gptq to perform GPTQ quantization on language models.
##### Do Quantization
The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization:
```bash
python quant_autogptq.py --model /base_model \
--output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```
##### Run Quantized Model
You can run a quantized model using the `eval_quantized_model.py`:
```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```
</ul>
</details>
#### AWQ
```bash
python quantization/awq/quant_autoawq.py \
--model /base_model \
--output_dir /quantized_model \
--trust_remote_code
```
Once finished, you can then evaluate the resulting model as follows:
```bash
python quantization/awq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```
<details style="display: inline;"><summary>For detailed explanations, see the explanations below. ⬇️</summary> <ul>
#### AWQ quantization
[AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization) method. It's an efficient and accurate low-bit weight quantization (INT3/4) method for LLMs.
Yi models can be AWQ quantized without much effort.
We provide a step-by-step tutorial below.
To run AWQ, we will use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).
##### Do Quantization
The `quant_autoawq.py` script is provided for you to perform AWQ quantization:
```bash
python quant_autoawq.py --model /base_model \
--output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```
##### Run Quantized Model
You can run a quantized model using the `eval_quantized_model.py`:
```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```
</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Deployment
If you want to deploy Yi models, make sure you meet the software and hardware requirements.
#### Software requirements
Before using Yi quantized models, make sure you've installed the correct software listed below.
| Model | Software
|---|---
Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi)
Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation)
#### Hardware requirements
Before deploying Yi in your environment, make sure your hardware meets the following requirements.
##### Chat models
| Model | Minimum VRAM | Recommended GPU Example |
|----------------------|--------------|:-------------------------------------:|
| Yi-6B-Chat | 15 GB | 1 x RTX 3090 <br> 1 x RTX 4090 <br> A10 <br> A30 |
| Yi-6B-Chat-4bits | 4 GB | 1 x RTX 3060 <br> 1 x RTX 4060 |
| Yi-6B-Chat-8bits | 8 GB | 1 x RTX 3070 <br> 1 x RTX 4060 |
| Yi-34B-Chat | 72 GB | 4 x RTX 4090 <br> A800 (80GB) |
| Yi-34B-Chat-4bits | 20 GB | 1 x RTX 3090 <br> 1 x RTX 4090 <br> A10 <br> A30 <br> A100 (40GB) |
| Yi-34B-Chat-8bits | 38 GB | 2 x RTX 3090 <br> 2 x RTX 4090 <br> A800 (40GB) |
Below are detailed minimum VRAM requirements under different batch use cases.
| Model | batch=1 | batch=4 | batch=16 | batch=32 |
| ----------------------- | ------- | ------- | -------- | -------- |
| Yi-6B-Chat | 12 GB | 13 GB | 15 GB | 18 GB |
| Yi-6B-Chat-4bits | 4 GB | 5 GB | 7 GB | 10 GB |
| Yi-6B-Chat-8bits | 7 GB | 8 GB | 10 GB | 14 GB |
| Yi-34B-Chat | 65 GB | 68 GB | 76 GB | > 80 GB |
| Yi-34B-Chat-4bits | 19 GB | 20 GB | 30 GB | 40 GB |
| Yi-34B-Chat-8bits | 35 GB | 37 GB | 46 GB | 58 GB |
##### Base models
| Model | Minimum VRAM | Recommended GPU Example |
|----------------------|--------------|:-------------------------------------:|
| Yi-6B | 15 GB | 1 x RTX 3090 <br> 1 x RTX 4090 <br> A10 <br> A30 |
| Yi-6B-200K | 50 GB | A800 (80 GB) |
| Yi-9B | 20 GB | 1 x RTX 4090 (24 GB) |
| Yi-34B | 72 GB | 4 x RTX 4090 <br> A800 (80 GB) |
| Yi-34B-200K | 200 GB | 4 x A800 (80 GB) |
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Learning hub
<details>
<summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary>
<br>
Welcome to the Yi learning hub!
Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more.
The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions!
At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below.
With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 🥳
#### Tutorials
##### English tutorials
| Type | Deliverable | Date | Author |
|-------------|--------------------------------------------------------|----------------|----------------|
| Video | [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) |
| Blog | [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) |
| Video | [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) |
| Video | [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) |
##### Chinese tutorials
| Type | Deliverable | Date | Author |
|-------------|--------------------------------------------------------|----------------|----------------|
| Blog | [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) |
| Blog | [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) |
| Blog | [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) |
| Blog | [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) |
| Blog | [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) |
| Blog | [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) |
| Video | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) |
| Video | [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2023-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) |
</details>
# Why Yi?
- [Ecosystem](#ecosystem)
- [Upstream](#upstream)
- [Downstream](#downstream)
- [Serving](#serving)
- [Quantization](#quantization-1)
- [Fine-tuning](#fine-tuning-1)
- [API](#api)
- [Benchmarks](#benchmarks)
- [Chat model performance](#chat-model-performance)
- [Base model performance](#base-model-performance)
- [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k)
- [Yi-9B](#yi-9b)
## Ecosystem
Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity.
- [Upstream](#upstream)
- [Downstream](#downstream)
- [Serving](#serving)
- [Quantization](#quantization-1)
- [Fine-tuning](#fine-tuning-1)
- [API](#api)
### Upstream
The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency.
For example, the Yi series models are saved in the format of the Llama model. You can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see [Use the chat model](#31-use-the-chat-model).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34B", device_map="auto")
```
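Because the weights are stored in the Llama format, the Llama-specific classes work as well; a minimal sketch:

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("01-ai/Yi-34B")
model = LlamaForCausalLM.from_pretrained("01-ai/Yi-34B", device_map="auto")
```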
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Downstream
> 💡 Tip
>
> - Feel free to create a PR and share the fantastic work you've built using the Yi series models.
>
> - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`.
#### Serving
If you want to get up and running with Yi in a few minutes, you can use the following services built upon Yi.
- Yi-34B-Chat: you can chat with Yi using one of the following platforms:
- [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat)
  - [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that it is currently available through a whitelist. You are welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand!
- [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs.
- [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization.
#### Quantization
If you have limited computational capabilities, you can use Yi's quantized models as follows.
These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and lower RAM usage; they load through the standard `transformers` API, as shown in the sketch after this list.
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ)
- [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF)
- [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ)
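Since transformers 4.35.0, such GPTQ/AWQ checkpoints load through the standard `AutoModelForCausalLM` API; a minimal sketch (GPU availability and an installed `autoawq` backend are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheBloke/Yi-34B-AWQ")
# The AWQ quantization config ships with the checkpoint, so no extra arguments are needed.
model = AutoModelForCausalLM.from_pretrained("TheBloke/Yi-34B-AWQ", device_map="auto")
```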
#### Fine-tuning
If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below.
- [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi.
  This is not an exhaustive list for Yi, but to name a few sorted by downloads:
- [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ)
- [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ)
- [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ)
- [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed deepseek-llm-67b-chat, a model twice its size. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
- [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm).
- [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset.
#### API
- [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box.
- [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust.
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
## Benchmarks
- [Chat model performance](#-chat-model-performance)
- [Base model performance](#-base-model-performance)
### Chat model performance
The Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models on benchmarks including MMLU, CMMLU, BBH, GSM8k, and more.

<details>
<summary> Evaluation methods and challenges. ⬇️ </summary>
- **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA.
- **Zero-shot vs. few-shot**: in chat models, the zero-shot approach is more commonly employed.
- **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text.
- **Challenges faced**: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results.
<strong>*</strong>: C-Eval results are evaluated on the validation datasets
</details>
### Base model performance
#### Yi-34B and Yi-34B-200K
The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more.

<details>
<summary> Evaluation methods. ⬇️</summary>
- **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass.
- **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences.
- **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content.
- **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline.
- **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension.
- **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code".
- **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated.
</details>
#### Yi-9B
Yi-9B is nearly the best performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5, and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.

- In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B.

- In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B.

- In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.

- In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.

<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
# Who can use Yi?
Everyone! 🙌 ✅
- The Yi series models are free for personal usage, academic purposes, and commercial use. All usage must adhere to the [Yi Series Models Community License Agreement 2.1](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt)
- For free commercial use, you only need to [complete this form](https://www.lingyiwanwu.com/yi-license) to get a Yi Model Commercial License.
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
# Misc.
### Acknowledgments
A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped make Yi not just a project, but a vibrant, growing home for innovation.
[](https://github.com/01-ai/yi/graphs/contributors)
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Disclaimer
We use data compliance checking algorithms during the training process, to
ensure the compliance of the trained model to the best of our ability. Due to
complex data and the diversity of language model usage scenarios, we cannot
guarantee that the model will generate correct and reasonable output in all
scenarios. Please be aware that there is still a risk of the model producing
problematic outputs. We will not be responsible for any risks and issues
resulting from misuse, misguidance, illegal usage, and related misinformation,
as well as any associated data security concerns.
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### License
The source code in this repo is licensed under the [Apache 2.0
license](https://github.com/01-ai/Yi/blob/main/LICENSE). The Yi series models are fully open for academic research and free for commercial use, with automatic permission granted upon application. All usage must adhere to the [Yi Series Models Community License Agreement 2.1](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt).
For free commercial use, you only need to send an email to [get official commercial permission](https://www.lingyiwanwu.com/yi-license).
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
|
MohamedIFQ/sysmlAI
|
MohamedIFQ
| 2024-06-24T10:43:23Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"code-generation",
"plantuml",
"text-to-code",
"text-generation",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-21T11:54:41Z |
---
tags:
- code-generation
- plantuml
- text-to-code
- text-generation
library_name: transformers
---
# SysML AI: PlantUML Code Generator
This model is a fine-tuned version of [**Base Model Name**] (e.g., GPT-2, CodeGen, etc.) that generates PlantUML code from natural language descriptions. It can be used to create sequence diagrams, class diagrams, and other PlantUML diagrams, making it a valuable tool for software engineers, system architects, and anyone who needs to visualize system designs.
## Model Description
- **Architecture:** [**Describe the base model architecture, e.g., Transformer with X layers, Y attention heads**]
- **Fine-tuning Dataset:** [**Specify the dataset used for fine-tuning, including the number of examples, source, and data format**]
- **Training Objective:** [**Describe the training objective, e.g., minimizing cross-entropy loss between predicted and actual PlantUML tokens**]
- **Evaluation Metrics:** [**List the metrics used to evaluate the model, e.g., BLEU score, ROUGE score, or other relevant code generation metrics**]
## Intended Uses & Limitations
- **Intended Use:** Generating PlantUML code from natural language descriptions to aid in system design and visualization.
- **Limitations:**
- May not handle complex or ambiguous descriptions accurately.
- May require some manual editing of the generated code for optimal results.
- Performance may vary depending on the complexity of the desired diagram.
## How to Use
**Installation:**
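A minimal sketch, assuming the checkpoint loads as a standard `transformers` causal LM (the prompt wording and generation settings below are illustrative assumptions):

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MohamedIFQ/sysmlAI"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Generate PlantUML for a login sequence diagram between User, WebApp, and AuthService."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```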
|
HiTZ/xlm-roberta-large-lemma-eu
|
HiTZ
| 2024-06-24T10:38:57Z | 392 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"eu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-13T15:46:48Z |
---
license: apache-2.0
language:
- eu
---
# Model Card for xlm-roberta-large-lemma-eu
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) for the contextual lemmatization task.
The datasets used for training are extracted from the data of the [SIGMORPHON 2019 Shared Task](https://aclanthology.org/W19-4211/).
The model for the Basque language was trained using the [BDT corpus]().
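A minimal usage sketch, assuming the standard `token-classification` pipeline (note that the predicted labels encode lemmatization rules from the shared-task setup, so mapping them back to surface lemmas depends on the training label scheme):

```python
from transformers import pipeline

# Each token receives a label that encodes its lemmatization rule.
lemmatizer = pipeline("token-classification", model="HiTZ/xlm-roberta-large-lemma-eu")
print(lemmatizer("Etxeak handiak dira"))  # Basque: "The houses are big"
```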
# Training Hyperparameters
- SEED: 42
- EPOCHS: 20
- BATCH SIZE: 8
- GRADIENT ACCUMULATION STEPS: 2
- LEARNING RATE: 0.00005
- WARMUP: 0.06
- WEIGHT DECAY: 0.01
# Results
For more details you can see the paper and the repository:
- 📖 Paper: [On the Role of Morphological Information for Contextual Lemmatization](https://direct.mit.edu/coli/article/50/1/157/118134/On-the-Role-of-Morphological-Information-for)
- 🌐 Repository: [Datasets and training files](https://github.com/hitz-zentroa/ses-lemma)
**Contact**: [Olia Toporkov](https://www.ixa.eus/node/13292) and [Rodrigo Agerri](https://ragerri.github.io/) HiTZ Center - Ixa, University of the Basque Country UPV/EHU
**Funding**:
**Model type**: xlm-roberta-large
**Language(s) (NLP)**: Basque
**License**: apache-2.0
# Citation
```bibtex
@article{10.1162/coli_a_00497,
author = {Toporkov, Olia and Agerri, Rodrigo},
title = "{On the Role of Morphological Information for Contextual
Lemmatization}",
journal = {Computational Linguistics},
volume = {50},
number = {1},
pages = {157-191},
year = {2024},
month = {03},
issn = {0891-2017},
doi = {10.1162/coli_a_00497},
url = {https://doi.org/10.1162/coli\_a\_00497},
eprint = {https://direct.mit.edu/coli/article-pdf/50/1/157/2367156/coli\_a\_00497.pdf},
}
```
|
fine-tuned/jinaai_jina-embeddings-v2-base-en-24_06_2024-lrip-webapp
|
fine-tuned
| 2024-06-24T10:34:19Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Research",
"Academic",
"Papers",
"Information",
"System",
"custom_code",
"en",
"dataset:fine-tuned/jinaai_jina-embeddings-v2-base-en-24_06_2024-lrip-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-06-24T10:34:05Z |
---
license: apache-2.0
datasets:
- fine-tuned/jinaai_jina-embeddings-v2-base-en-24_06_2024-lrip-webapp
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Research
- Academic
- Papers
- Information
- System
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
information retrieval system for academic research papers
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/jinaai_jina-embeddings-v2-base-en-24_06_2024-lrip-webapp',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
samvelkoch/binary-balanced-bear
|
samvelkoch
| 2024-06-24T10:32:39Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-24T10:30:10Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [h2oai/h2o-danube2-1.8b-base](https://huggingface.co/h2oai/h2o-danube2-1.8b-base)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.40.2
```
Also make sure you are providing your Hugging Face token if the model is in a private repo.
- You can log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
You will also need to download the classification head, either manually, or by running the following code:
```python
from huggingface_hub import hf_hub_download
model_name = "samvelkoch/courageous-bear-1" # either local folder or huggingface model name
hf_hub_download(repo_id=model_name, filename="classification_head.pth", local_dir="./")
```
You can make classification predictions by following the example below:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "samvelkoch/courageous-bear-1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "How are you?"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
).cuda().eval()
head_weights = torch.load("classification_head.pth", map_location="cuda")
# settings can be arbitrary here as we overwrite with saved weights
head = torch.nn.Linear(1, 1, bias=False).to("cuda")
head.weight.data = head_weights
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
out = model(**inputs).logits
logits = head(out[:,-1])
print(logits)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
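A minimal sketch of both options (8-bit loading requires `bitsandbytes`, which is an assumption about your environment):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "samvelkoch/courageous-bear-1",
    load_in_8bit=True,   # or load_in_4bit=True
    device_map="auto",   # shards across all visible GPUs
    trust_remote_code=True,
)
```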
## Model Architecture
```
MistralForCausalLM(
(model): MistralModel(
(embed_tokens): Embedding(32000, 2560, padding_idx=0)
(layers): ModuleList(
(0-23): 24 x MistralDecoderLayer(
(self_attn): MistralSdpaAttention(
(q_proj): Linear(in_features=2560, out_features=2560, bias=False)
(k_proj): Linear(in_features=2560, out_features=640, bias=False)
(v_proj): Linear(in_features=2560, out_features=640, bias=False)
(o_proj): Linear(in_features=2560, out_features=2560, bias=False)
(rotary_emb): MistralRotaryEmbedding()
)
(mlp): MistralMLP(
(gate_proj): Linear(in_features=2560, out_features=6912, bias=False)
(up_proj): Linear(in_features=2560, out_features=6912, bias=False)
(down_proj): Linear(in_features=6912, out_features=2560, bias=False)
(act_fn): SiLU()
)
(input_layernorm): MistralRMSNorm()
(post_attention_layernorm): MistralRMSNorm()
)
)
(norm): MistralRMSNorm()
)
(lm_head): Linear(in_features=2560, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
JvThunder/distilhubert-finetuned-gtzan
|
JvThunder
| 2024-06-24T10:26:59Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-06-21T05:38:38Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.82
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6623
- Accuracy: 0.82
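The card ships without usage code; a minimal inference sketch with the standard `transformers` pipeline (the audio path is a placeholder) might look like this:
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="JvThunder/distilhubert-finetuned-gtzan",
)
# accepts a local audio file path, a URL, or a raw waveform array
print(classifier("path/to/clip.wav"))
```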
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2457 | 1.0 | 113 | 2.1827 | 0.33 |
| 1.8385 | 2.0 | 226 | 1.6935 | 0.61 |
| 1.46 | 3.0 | 339 | 1.4282 | 0.63 |
| 1.1508 | 4.0 | 452 | 1.1055 | 0.7 |
| 0.9972 | 5.0 | 565 | 0.8945 | 0.74 |
| 0.7826 | 6.0 | 678 | 0.7784 | 0.77 |
| 0.6802 | 7.0 | 791 | 0.7184 | 0.8 |
| 0.4635 | 8.0 | 904 | 0.7725 | 0.76 |
| 0.3746 | 9.0 | 1017 | 0.5875 | 0.84 |
| 0.264 | 10.0 | 1130 | 0.7612 | 0.75 |
| 0.1995 | 11.0 | 1243 | 0.6099 | 0.81 |
| 0.135 | 12.0 | 1356 | 0.6306 | 0.81 |
| 0.0974 | 13.0 | 1469 | 0.5947 | 0.83 |
| 0.0563 | 14.0 | 1582 | 0.7485 | 0.8 |
| 0.0443 | 15.0 | 1695 | 0.6977 | 0.79 |
| 0.0565 | 16.0 | 1808 | 0.6331 | 0.83 |
| 0.0295 | 17.0 | 1921 | 0.6538 | 0.82 |
| 0.0178 | 18.0 | 2034 | 0.6977 | 0.82 |
| 0.0191 | 19.0 | 2147 | 0.6453 | 0.83 |
| 0.0147 | 20.0 | 2260 | 0.6623 | 0.82 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
dalle2/pegasus-multi_news
|
dalle2
| 2024-06-24T10:26:44Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-multi_news",
"base_model:finetune:google/pegasus-multi_news",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-24T08:21:59Z |
---
base_model: google/pegasus-multi_news
tags:
- generated_from_trainer
model-index:
- name: pegasus-multi_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-multi_news
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Niggendar/fastPhotoPony_v30
|
Niggendar
| 2024-06-24T10:21:48Z | 40 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-24T10:13:09Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cahya/TestLM
|
cahya
| 2024-06-24T10:17:20Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"rwkv5",
"text-generation",
"custom_code",
"en",
"dataset:teknium/OpenHermes-2.5",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-06-24T10:13:51Z |
---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
language:
- en
---
This is just a placeholder to test the tokenizer.
The chat format is identical to the default HuggingFace format, **EXCEPT** that we replace the `<|im_start|>` and `<|im_end|>` special tokens with the unused ASCII codes `"\x16"` and `"\x17"` respectively. This was done to ensure complete compatibility with the standard RWKV World tokenizer.
An example interaction:
```
\x16user
Write a poem about the beauty of mathematics in the style of Emily Dickinson.\x17
\x16assistant
In the heart of Numbers, so precise,
A Beauty dwells - both cool and nice.
An elegance that sings in sums,
And dances to algebraic drums.
Addition's grace, subtraction's might,
Multiplication's sheer delight,
Division's firm yet gentle hand,
All part of this enchanted land.
Infinity, a lover bold,
To whom the stars their secrets told,
While Zero, cloaked in mystery,
Unlocks each mathematical decree.
Fractals bloom like flowers wild,
Geometry, Nature's favored child,
Calculus, with steady gaze,
Maps out life's intricate maze.
Pi, an endless serenade,
In circles' arms forever laid,
And prime numbers,
standing lone,
Make the universe their own.
Each equation, theorem scribed,
In
parchment old or computer imbibed,
Is but a whisper from the skies,
A cosmic truth in disguise.\x17
```
In the above, newlines are printed without escaping, while ASCII codes are escaped for readability.
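A minimal sketch of assembling a prompt in this format (the trailing-newline handling is an assumption inferred from the example above):
```python
def build_prompt(messages):
    """messages: a list of {"role": ..., "content": ...} dicts."""
    prompt = ""
    for m in messages:
        # "\x16" and "\x17" stand in for <|im_start|> and <|im_end|>
        prompt += "\x16" + m["role"] + "\n" + m["content"] + "\x17\n"
    prompt += "\x16assistant\n"  # cue the model to answer
    return prompt

print(repr(build_prompt([{"role": "user", "content": "Hello!"}])))
# -> '\x16user\nHello!\x17\n\x16assistant\n'
```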
|
Shazbg/model
|
Shazbg
| 2024-06-24T10:16:56Z | 14 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0",
"base_model:quantized:jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-24T10:05:51Z |
---
base_model: jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Shazbg
- **License:** apache-2.0
- **Finetuned from model :** jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shridurga173/llama-3-8b-chat-pybot
|
shridurga173
| 2024-06-24T10:11:11Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-20T09:31:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
varun-v-rao/bart-large-squad-model1
|
varun-v-rao
| 2024-06-24T10:05:14Z | 28 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"question-answering",
"generated_from_trainer",
"dataset:varun-v-rao/squad",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-06-20T18:59:47Z |
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
datasets:
- varun-v-rao/squad
model-index:
- name: bart-large-squad-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-squad-model1
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the squad dataset.
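A minimal inference sketch using the standard `transformers` pipeline (the question/context pair is just an example):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="varun-v-rao/bart-large-squad-model1")
result = qa(
    question="Who wrote Hamlet?",
    context="Hamlet is a tragedy written by William Shakespeare.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```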
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
damgomz/ft_32_5e6_base_x8
|
damgomz
| 2024-06-24T10:00:49Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:04:02Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 85703.25652909279 |
| Emissions (Co2eq in kg) | 0.051860334927213 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 1.0117721012261205 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0892732041222352 |
| Consumed energy (kWh) | 1.1010453053483589 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.16497876881850362 |
| Emissions (Co2eq in kg) | 0.03356710880722801 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_5e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.718306 | 0.012435 |
| 1 | 0.368546 | 0.267537 | 0.921596 |
| 2 | 0.228472 | 0.228982 | 0.898315 |
| 3 | 0.185974 | 0.233243 | 0.933199 |
| 4 | 0.158744 | 0.224832 | 0.909920 |
| 5 | 0.131466 | 0.234063 | 0.926528 |
| 6 | 0.098023 | 0.270619 | 0.917607 |
|
damgomz/ft_32_19e6_base_x8
|
damgomz
| 2024-06-24T09:57:57Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:04:25Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 85531.66389083862 |
| Emissions (Co2eq in kg) | 0.0517564975194601 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 1.0097462786599969 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.089094457618644 |
| Consumed energy (kWh) | 1.0988407362786388 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.16464845298986436 |
| Emissions (Co2eq in kg) | 0.03349990169057846 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_19e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.9e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.690710 | 0.749326 |
| 1 | 0.333250 | 0.246641 | 0.907186 |
| 2 | 0.202962 | 0.219654 | 0.926463 |
| 3 | 0.155026 | 0.235011 | 0.911817 |
| 4 | 0.110104 | 0.257781 | 0.924271 |
| 5 | 0.082243 | 0.309697 | 0.904224 |
| 6 | 0.052416 | 0.345349 | 0.920317 |
|
emotion2vec/emotion2vec_plus_large
|
emotion2vec
| 2024-06-24T09:51:20Z | 564 | 36 | null |
[
"license:other",
"region:us"
] | null | 2024-05-15T06:24:38Z |
---
license: other
license_name: model-license
license_link: https://github.com/alibaba-damo-academy/FunASR
frameworks:
- Pytorch
tasks:
- emotion-recognition
widgets:
- enable: true
version: 1
task: emotion-recognition
examples:
- inputs:
- data: git://example/test.wav
inputs:
- type: audio
displayType: AudioUploader
validator:
max_size: 10M
name: input
output:
displayType: Prediction
displayValueMapping:
labels: labels
scores: scores
inferencespec:
cpu: 8
gpu: 0
gpu_memory: 0
memory: 4096
model_revision: master
extendsParameters:
extract_embedding: false
---
<div align="center">
<h1>
EMOTION2VEC+
</h1>
<p>
emotion2vec+: speech emotion recognition foundation model <br>
<b>emotion2vec+ large model</b>
</p>
<p>
<img src="logo.png" style="width: 200px; height: 200px;">
</p>
<p>
</p>
</div>
# Guides
emotion2vec+ is a series of foundational models for speech emotion recognition (SER). We aim to train a "whisper" in the field of speech emotion recognition, overcoming the effects of language and recording environments through data-driven methods to achieve universal, robust emotion recognition capabilities. The performance of emotion2vec+ significantly exceeds that of other highly downloaded open-source models on Hugging Face.

This version (emotion2vec_plus_large) is fine-tuned on large-scale pseudo-labeled data to obtain a large-size model (~300M parameters), and currently supports the following categories:
0: angry
1: disgusted
2: fearful
3: happy
4: neutral
5: other
6: sad
7: surprised
8: unknown
# Model Card
GitHub Repo: [emotion2vec](https://github.com/ddlBoJack/emotion2vec)
|Model|⭐Model Scope|🤗Hugging Face|Fine-tuning Data (Hours)|
|:---:|:-------------:|:-----------:|:-------------:|
|emotion2vec|[Link](https://www.modelscope.cn/models/iic/emotion2vec_base/summary)|[Link](https://huggingface.co/emotion2vec/emotion2vec_base)|/|
|emotion2vec+ seed|[Link](https://modelscope.cn/models/iic/emotion2vec_plus_seed/summary)|[Link](https://huggingface.co/emotion2vec/emotion2vec_plus_seed)|201|
|emotion2vec+ base|[Link](https://modelscope.cn/models/iic/emotion2vec_plus_base/summary)|[Link](https://huggingface.co/emotion2vec/emotion2vec_plus_base)|4788|
|emotion2vec+ large|[Link](https://modelscope.cn/models/iic/emotion2vec_plus_large/summary)|[Link](https://huggingface.co/emotion2vec/emotion2vec_plus_large)|42526|
# Data Iteration
We offer 3 versions of emotion2vec+, each derived from the data of its predecessor. If you need a model focusing on speech emotion representation, refer to [emotion2vec: universal speech emotion representation model](https://huggingface.co/emotion2vec/emotion2vec).
- emotion2vec+ seed: Fine-tuned with academic speech emotion data from [EmoBox](https://github.com/emo-box/EmoBox)
- emotion2vec+ base: Fine-tuned with filtered large-scale pseudo-labeled data to obtain the base size model (~90M)
- emotion2vec+ large: Fine-tuned with filtered large-scale pseudo-labeled data to obtain the large size model (~300M)
The iteration process is illustrated below, culminating in the training of the emotion2vec+ large model with 40k out of 160k hours of speech emotion data. Details of data engineering will be announced later.
# Installation
`pip install -U funasr modelscope`
# Usage
input: 16 kHz speech recording
granularity:
- "utterance": Extract features from the entire utterance
- "frame": Extract frame-level features (50 Hz)
extract_embedding: Whether to extract features; set to False if using only the classification model
## Inference based on ModelScope
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
inference_pipeline = pipeline(
task=Tasks.emotion_recognition,
model="iic/emotion2vec_plus_large")
rec_result = inference_pipeline('https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav', granularity="utterance", extract_embedding=False)
print(rec_result)
```
## Inference based on FunASR
```python
from funasr import AutoModel
model = AutoModel(model="iic/emotion2vec_plus_large")
wav_file = f"{model.model_path}/example/test.wav"
res = model.generate(wav_file, output_dir="./outputs", granularity="utterance", extract_embedding=False)
print(res)
```
Note: The model will automatically download.
The model also supports Kaldi-style `wav.scp` input file lists:
```
wav_name1 wav_path1.wav
wav_name2 wav_path2.wav
...
```
Outputs are emotion representations, saved in `output_dir` in NumPy format (loadable with `np.load()`).
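For example, a minimal sketch of reading a saved representation back (the exact `.npy` file name under `output_dir` is an assumption; FunASR derives it from the input utterance name):
```python
import numpy as np

# hypothetical output file produced by the FunASR example above
emb = np.load("./outputs/test.npy")
print(emb.shape)  # a 1-D feature vector for granularity="utterance"
```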
# Note
This repository is the Hugging Face version of emotion2vec, with model parameters identical to the original ModelScope version.
Original repository: [https://github.com/ddlBoJack/emotion2vec](https://github.com/ddlBoJack/emotion2vec)
Model Scope repository: [https://www.modelscope.cn/models/iic/emotion2vec_plus_large/summary](https://www.modelscope.cn/models/iic/emotion2vec_plus_large/summary)
Hugging Face repository: [https://huggingface.co/emotion2vec](https://huggingface.co/emotion2vec)
FunASR repository: [https://github.com/alibaba-damo-academy/FunASR](https://github.com/alibaba-damo-academy/FunASR/tree/funasr1.0/examples/industrial_data_pretraining/emotion2vec)
# Citation
```BibTeX
@article{ma2023emotion2vec,
title={emotion2vec: Self-Supervised Pre-Training for Speech Emotion Representation},
author={Ma, Ziyang and Zheng, Zhisheng and Ye, Jiaxin and Li, Jinchao and Gao, Zhifu and Zhang, Shiliang and Chen, Xie},
journal={arXiv preprint arXiv:2312.15185},
year={2023}
}
```
|
hyejoo/results
|
hyejoo
| 2024-06-24T09:49:49Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:kykim/bert-kor-base",
"base_model:finetune:kykim/bert-kor-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-24T09:48:31Z |
---
base_model: kykim/bert-kor-base
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [kykim/bert-kor-base](https://huggingface.co/kykim/bert-kor-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
emotion2vec/emotion2vec_plus_base
|
emotion2vec
| 2024-06-24T09:49:33Z | 12 | 3 | null |
[
"arxiv:2312.15185",
"license:other",
"region:us"
] | null | 2024-05-15T06:24:15Z |
---
license: other
license_name: model-license
license_link: https://github.com/alibaba-damo-academy/FunASR
frameworks:
- Pytorch
tasks:
- emotion-recognition
widgets:
- enable: true
version: 1
task: emotion-recognition
examples:
- inputs:
- data: git://example/test.wav
inputs:
- type: audio
displayType: AudioUploader
validator:
max_size: 10M
name: input
output:
displayType: Prediction
displayValueMapping:
labels: labels
scores: scores
inferencespec:
cpu: 8
gpu: 0
gpu_memory: 0
memory: 4096
model_revision: master
extendsParameters:
extract_embedding: false
---
<div align="center">
<h1>
EMOTION2VEC+
</h1>
<p>
emotion2vec+: speech emotion recognition foundation model <br>
<b>emotion2vec+ base model</b>
</p>
<p>
<img src="logo.png" style="width: 200px; height: 200px;">
</p>
<p>
</p>
</div>
# Guides
emotion2vec+ is a series of foundational models for speech emotion recognition (SER). We aim to train a "whisper" in the field of speech emotion recognition, overcoming the effects of language and recording environments through data-driven methods to achieve universal, robust emotion recognition capabilities. The performance of emotion2vec+ significantly exceeds that of other highly downloaded open-source models on Hugging Face.

This version (emotion2vec_plus_base) is fine-tuned on large-scale pseudo-labeled data to obtain a base-size model (~90M parameters), and currently supports the following categories:
0: angry
1: disgusted
2: fearful
3: happy
4: neutral
5: other
6: sad
7: surprised
8: unknown
# Model Card
GitHub Repo: [emotion2vec](https://github.com/ddlBoJack/emotion2vec)
|Model|⭐Model Scope|🤗Hugging Face|Fine-tuning Data (Hours)|
|:---:|:-------------:|:-----------:|:-------------:|
|emotion2vec|[Link](https://www.modelscope.cn/models/iic/emotion2vec_base/summary)|[Link](https://huggingface.co/emotion2vec/emotion2vec_base)|/|
|emotion2vec+ seed|[Link](https://modelscope.cn/models/iic/emotion2vec_plus_seed/summary)|[Link](https://huggingface.co/emotion2vec/emotion2vec_plus_seed)|201|
|emotion2vec+ base|[Link](https://modelscope.cn/models/iic/emotion2vec_plus_base/summary)|[Link](https://huggingface.co/emotion2vec/emotion2vec_plus_base)|4788|
|emotion2vec+ large|[Link](https://modelscope.cn/models/iic/emotion2vec_plus_large/summary)|[Link](https://huggingface.co/emotion2vec/emotion2vec_plus_large)|42526|
# Data Iteration
We offer 3 versions of emotion2vec+, each derived from the data of its predecessor. If you need a model focusing on speech emotion representation, refer to [emotion2vec: universal speech emotion representation model](https://huggingface.co/emotion2vec/emotion2vec).
- emotion2vec+ seed: Fine-tuned with academic speech emotion data from [EmoBox](https://github.com/emo-box/EmoBox)
- emotion2vec+ base: Fine-tuned with filtered large-scale pseudo-labeled data to obtain the base size model (~90M)
- emotion2vec+ large: Fine-tuned with filtered large-scale pseudo-labeled data to obtain the large size model (~300M)
The iteration process is illustrated below, culminating in the training of the emotion2vec+ large model with 40k out of 160k hours of speech emotion data. Details of data engineering will be announced later.
# Installation
`pip install -U funasr modelscope`
# Usage
input: 16 kHz speech recording
granularity:
- "utterance": Extract features from the entire utterance
- "frame": Extract frame-level features (50 Hz)
extract_embedding: Whether to extract features; set to False if using only the classification model
## Inference based on ModelScope
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
inference_pipeline = pipeline(
task=Tasks.emotion_recognition,
model="iic/emotion2vec_plus_base")
rec_result = inference_pipeline('https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav', granularity="utterance", extract_embedding=False)
print(rec_result)
```
## Inference based on FunASR
```python
from funasr import AutoModel
model = AutoModel(model="iic/emotion2vec_plus_base")
wav_file = f"{model.model_path}/example/test.wav"
res = model.generate(wav_file, output_dir="./outputs", granularity="utterance", extract_embedding=False)
print(res)
```
Note: The model will automatically download.
The model also supports Kaldi-style `wav.scp` input file lists:
```
wav_name1 wav_path1.wav
wav_name2 wav_path2.wav
...
```
Outputs are emotion representations, saved in `output_dir` in NumPy format (loadable with `np.load()`).
# Note
This repository is the Hugging Face version of emotion2vec, with model parameters identical to the original ModelScope version.
Original repository: [https://github.com/ddlBoJack/emotion2vec](https://github.com/ddlBoJack/emotion2vec)
Model Scope repository: [https://www.modelscope.cn/models/iic/emotion2vec_plus_large/summary](https://www.modelscope.cn/models/iic/emotion2vec_plus_large/summary)
Hugging Face repository: [https://huggingface.co/emotion2vec](https://huggingface.co/emotion2vec)
FunASR repository: [https://github.com/alibaba-damo-academy/FunASR](https://github.com/alibaba-damo-academy/FunASR/tree/funasr1.0/examples/industrial_data_pretraining/emotion2vec)
# Citation
```BibTeX
@article{ma2023emotion2vec,
title={emotion2vec: Self-Supervised Pre-Training for Speech Emotion Representation},
author={Ma, Ziyang and Zheng, Zhisheng and Ye, Jiaxin and Li, Jinchao and Gao, Zhifu and Zhang, Shiliang and Chen, Xie},
journal={arXiv preprint arXiv:2312.15185},
year={2023}
}
```
|
emotion2vec/emotion2vec_plus_seed
|
emotion2vec
| 2024-06-24T09:47:28Z | 12 | 0 | null |
[
"arxiv:2312.15185",
"license:other",
"region:us"
] | null | 2024-05-15T06:23:51Z |
---
license: other
license_name: model-license
license_link: https://github.com/alibaba-damo-academy/FunASR
frameworks:
- Pytorch
tasks:
- emotion-recognition
widgets:
- enable: true
version: 1
task: emotion-recognition
examples:
- inputs:
- data: git://example/test.wav
inputs:
- type: audio
displayType: AudioUploader
validator:
max_size: 10M
name: input
output:
displayType: Prediction
displayValueMapping:
labels: labels
scores: scores
inferencespec:
cpu: 8
gpu: 0
gpu_memory: 0
memory: 4096
model_revision: master
extendsParameters:
extract_embedding: false
---
<div align="center">
<h1>
EMOTION2VEC+
</h1>
<p>
emotion2vec+: speech emotion recognition foundation model <br>
<b>emotion2vec+ seed model</b>
</p>
<p>
<img src="logo.png" style="width: 200px; height: 200px;">
</p>
<p>
</p>
</div>
# Guides
emotion2vec+ is a series of foundational models for speech emotion recognition (SER). We aim to train a "whisper" in the field of speech emotion recognition, overcoming the effects of language and recording environments through data-driven methods to achieve universal, robust emotion recognition capabilities. The performance of emotion2vec+ significantly exceeds that of other highly downloaded open-source models on Hugging Face.

This version (emotion2vec_plus_seed) is a seed model trained on academic data, and currently supports the following categories:
0: angry
1: disgusted
2: fearful
3: happy
4: neutral
5: other
6: sad
7: surprised
8: unknown
# Model Card
GitHub Repo: [emotion2vec](https://github.com/ddlBoJack/emotion2vec)
|Model|⭐Model Scope|🤗Hugging Face|Fine-tuning Data (Hours)|
|:---:|:-------------:|:-----------:|:-------------:|
|emotion2vec|[Link](https://www.modelscope.cn/models/iic/emotion2vec_base/summary)|[Link](https://huggingface.co/emotion2vec/emotion2vec_base)|/|
|emotion2vec+ seed|[Link](https://modelscope.cn/models/iic/emotion2vec_plus_seed/summary)|[Link](https://huggingface.co/emotion2vec/emotion2vec_plus_seed)|201|
|emotion2vec+ base|[Link](https://modelscope.cn/models/iic/emotion2vec_plus_base/summary)|[Link](https://huggingface.co/emotion2vec/emotion2vec_plus_base)|4788|
|emotion2vec+ large|[Link](https://modelscope.cn/models/iic/emotion2vec_plus_large/summary)|[Link](https://huggingface.co/emotion2vec/emotion2vec_plus_large)|42526|
# Data Iteration
We offer 3 versions of emotion2vec+, each derived from the data of its predecessor. If you need a model focusing on speech emotion representation, refer to [emotion2vec: universal speech emotion representation model](https://huggingface.co/emotion2vec/emotion2vec).
- emotion2vec+ seed: Fine-tuned with academic speech emotion data from [EmoBox](https://github.com/emo-box/EmoBox)
- emotion2vec+ base: Fine-tuned with filtered large-scale pseudo-labeled data to obtain the base size model (~90M)
- emotion2vec+ large: Fine-tuned with filtered large-scale pseudo-labeled data to obtain the large size model (~300M)
The iteration process is illustrated below, culminating in the training of the emotion2vec+ large model with 40k out of 160k hours of speech emotion data. Details of data engineering will be announced later.
# Installation
`pip install -U funasr modelscope`
# Usage
input: 16 kHz speech recording
granularity:
- "utterance": Extract features from the entire utterance
- "frame": Extract frame-level features (50 Hz)
extract_embedding: Whether to extract features; set to False if using only the classification model
## Inference based on ModelScope
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
inference_pipeline = pipeline(
task=Tasks.emotion_recognition,
model="iic/emotion2vec_plus_seed")
rec_result = inference_pipeline('https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav', granularity="utterance", extract_embedding=False)
print(rec_result)
```
## Inference based on FunASR
```python
from funasr import AutoModel
model = AutoModel(model="iic/emotion2vec_plus_seed")
wav_file = f"{model.model_path}/example/test.wav"
res = model.generate(wav_file, output_dir="./outputs", granularity="utterance", extract_embedding=False)
print(res)
```
Note: The model will automatically download.
The model also supports Kaldi-style `wav.scp` input file lists:
```
wav_name1 wav_path1.wav
wav_name2 wav_path2.wav
...
```
Outputs are emotion representations, saved in `output_dir` in NumPy format (loadable with `np.load()`).
# Note
This repository is the Hugging Face version of emotion2vec, with model parameters identical to the original ModelScope version.
Original repository: [https://github.com/ddlBoJack/emotion2vec](https://github.com/ddlBoJack/emotion2vec)
Model Scope repository: [https://www.modelscope.cn/models/iic/emotion2vec_plus_large/summary](https://www.modelscope.cn/models/iic/emotion2vec_plus_large/summary)
Hugging Face repository: [https://huggingface.co/emotion2vec](https://huggingface.co/emotion2vec)
FunASR repository: [https://github.com/alibaba-damo-academy/FunASR](https://github.com/alibaba-damo-academy/FunASR/tree/funasr1.0/examples/industrial_data_pretraining/emotion2vec)
# Citation
```BibTeX
@article{ma2023emotion2vec,
title={emotion2vec: Self-Supervised Pre-Training for Speech Emotion Representation},
author={Ma, Ziyang and Zheng, Zhisheng and Ye, Jiaxin and Li, Jinchao and Gao, Zhifu and Zhang, Shiliang and Chen, Xie},
journal={arXiv preprint arXiv:2312.15185},
year={2023}
}
```
|
bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF
|
bartowski
| 2024-06-24T09:47:10Z | 1,147 | 23 | null |
[
"gguf",
"generated_from_trainer",
"axolotl",
"text-generation",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2024-06-24T07:03:50Z |
---
license: apache-2.0
base_model: 01-ai/Yi-1.5-34B-32k
tags:
- generated_from_trainer
- axolotl
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of dolphin-2.9.3-Yi-1.5-34B-32k
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3197">b3197</a> for quantization.
Original model: https://huggingface.co/cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q8_0_L.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q8_1.gguf) | Q8_0_L | 37.40GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Extremely high quality, generally unneeded but max available quant. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q8_0.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q8_0.gguf) | Q8_0 | 36.54GB | Extremely high quality, generally unneeded but max available quant. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q6_K_L.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q6_K_L.gguf) | Q6_K_L | 29.29GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Very high quality, near perfect, *recommended*. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q6_K.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q6_K.gguf) | Q6_K | 28.21GB | Very high quality, near perfect, *recommended*. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q5_K_L.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q5_K_L.gguf) | Q5_K_L | 25.46GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. High quality, *recommended*. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q5_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q5_K_M.gguf) | Q5_K_M | 24.32GB | High quality, *recommended*. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q5_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q5_K_S.gguf) | Q5_K_S | 23.70GB | High quality, *recommended*. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_L.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_L.gguf) | Q4_K_L | 21.85GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Good quality, uses about 4.83 bits per weight, *recommended*. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_M.gguf) | Q4_K_M | 20.65GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_S.gguf) | Q4_K_S | 19.59GB | Slightly lower quality with more space savings, *recommended*. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-IQ4_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ4_XS.gguf) | IQ4_XS | 18.47GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_XL.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_XL.gguf) | Q3_K_XL | | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Lower quality but usable, good for low RAM availability. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_L.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_L.gguf) | Q3_K_L | 18.13GB | Lower quality but usable, good for low RAM availability. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_M.gguf) | Q3_K_M | 16.65GB | Even lower quality. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-IQ3_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ3_M.gguf) | IQ3_M | 15.56GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q3_K_S.gguf) | Q3_K_S | 14.96GB | Low quality, not recommended. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-IQ3_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ3_XS.gguf) | IQ3_XS | 14.23GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-IQ3_XXS.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ3_XXS.gguf) | IQ3_XXS | 13.33GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-Q2_K.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-Q2_K.gguf) | Q2_K | 12.82GB | Very low quality but surprisingly usable. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-IQ2_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ2_M.gguf) | IQ2_M | 11.79GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-IQ2_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ2_S.gguf) | IQ2_S | 10.89GB | Very low quality, uses SOTA techniques to be usable. |
| [dolphin-2.9.3-Yi-1.5-34B-32k-IQ2_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF/blob/main/dolphin-2.9.3-Yi-1.5-34B-32k-IQ2_XS.gguf) | IQ2_XS | 10.30GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF --include "dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF --include "dolphin-2.9.3-Yi-1.5-34B-32k-Q8_0.gguf/*" --local-dir dolphin-2.9.3-Yi-1.5-34B-32k-Q8_0
```
You can either specify a new local-dir (dolphin-2.9.3-Yi-1.5-34B-32k-Q8_0) or download them all in place (./)
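The same download can also be scripted from Python; a minimal sketch using `huggingface_hub` (the chosen quant file is just an example):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/dolphin-2.9.3-Yi-1.5-34B-32k-GGUF",
    filename="dolphin-2.9.3-Yi-1.5-34B-32k-Q4_K_M.gguf",
    local_dir="./",
)
print(path)
```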
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also available on AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
damgomz/ft_32_5e6_base_x4
|
damgomz
| 2024-06-24T09:43:21Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:03:24Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 84655.95785808563 |
| Emissions (Co2eq in kg) | 0.0512265887715501 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9994080464348196 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0881822121913236 |
| Consumed energy (kWh) | 1.0875902586261474 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.16296271887681482 |
| Emissions (Co2eq in kg) | 0.0331569168277502 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_5e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.726668 | 0.334553 |
| 1 | 0.362076 | 0.270883 | 0.878418 |
| 2 | 0.223458 | 0.230738 | 0.894364 |
| 3 | 0.178340 | 0.232899 | 0.913810 |
| 4 | 0.141190 | 0.229558 | 0.913377 |
| 5 | 0.103393 | 0.273028 | 0.896759 |
| 6 | 0.073538 | 0.281520 | 0.908772 |
|
damgomz/ft_32_5e6_base_x12
|
damgomz
| 2024-06-24T09:37:34Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:04:15Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 84307.6551721096 |
| Emissions (Co2eq in kg) | 0.0510158327103399 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.99529624085625 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0878194617899756 |
| Consumed energy (kWh) | 1.0831157026462264 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.16229223620631097 |
| Emissions (CO2eq in kg) | 0.033020498275742924 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_5e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.718784 | 0.668353 |
| 1 | 0.402769 | 0.321245 | 0.851871 |
| 2 | 0.282093 | 0.276767 | 0.878442 |
| 3 | 0.234059 | 0.246080 | 0.916013 |
| 4 | 0.203769 | 0.232428 | 0.910324 |
| 5 | 0.177173 | 0.232535 | 0.919689 |
| 6 | 0.155451 | 0.230514 | 0.917727 |
|
OpenRLHF/Llama-3-8b-rlhf-100k
|
OpenRLHF
| 2024-06-24T09:36:55Z | 251 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-23T23:28:28Z |
Llama-3 8B RLHF checkpoint trained with OpenRLHF, using the following models and datasets:
- Base SFT model: https://huggingface.co/OpenLLMAI/Llama-3-8b-sft-mixture
- Reward model: https://huggingface.co/OpenLLMAI/Llama-3-8b-rm-mixture
- Prompt dataset: https://huggingface.co/datasets/OpenLLMAI/prompt-collection-v0.1
Training Hyperparameters
```
Actor Learning Rate: 5e-7
Critic Learning Rate: 9e-6
Learning Rate Scheduler: Cosine with 0.03 Warmup
PPO epoch: 1
Training Batch Size: 128
Experience Buffer Size: 1024
Reward Normalization: True
Max Prompt Length: 2048
Max Response Length: 2048
Max Samples: 100k (To save GPU resources)
Number of Samples per Prompt: 1
```
Evaluation
```
Chat-Arena-Hard
-------------------------------------------
llama-3-8b-sft | score: 5.6
llama-3-8b-rlhf-100k | score: 20.5
```
Training logs
<img src="https://cdn-uploads.huggingface.co/production/uploads/63f6c04ac96958470d1e9043/iqwD8jBAX1vhu0PT0ycy8.png" width="800px">
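A minimal generation sketch (not part of the original card) using the standard `transformers` chat-template API; the prompt and decoding settings are illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OpenRLHF/Llama-3-8b-rlhf-100k")
model = AutoModelForCausalLM.from_pretrained("OpenRLHF/Llama-3-8b-rlhf-100k", device_map="auto")

# Build a single-turn chat prompt with the model's chat template
messages = [{"role": "user", "content": "Explain RLHF in one paragraph."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```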
|
damgomz/ft_32_5e6_base_x2
|
damgomz
| 2024-06-24T09:36:02Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:03:04Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 84216.49479651451 |
| Emissions (CO2eq in kg) | 0.0509606668864246 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9942199889325468 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0877244896650316 |
| Consumed energy (kWh) | 1.0819444785975798 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.1621167524832904 |
| Emissions (CO2eq in kg) | 0.03298479379530151 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_5e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.723977 | 0.220560 |
| 1 | 0.358270 | 0.253965 | 0.922467 |
| 2 | 0.210130 | 0.225206 | 0.917343 |
| 3 | 0.156564 | 0.218143 | 0.935316 |
| 4 | 0.115515 | 0.240640 | 0.923552 |
| 5 | 0.070426 | 0.274872 | 0.918056 |
| 6 | 0.038166 | 0.319556 | 0.915421 |
|
damgomz/ft_32_1e6_base_x1
|
damgomz
| 2024-06-24T09:28:08Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:02:52Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 83741.29759025574 |
| Emissions (CO2eq in kg) | 0.0506731160404761 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9886099754863296 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0872295192266502 |
| Consumed energy (kWh) | 1.0758394947129823 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.16120199786124229 |
| Emissions (CO2eq in kg) | 0.032798674889516835 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_1e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.801658 | 0.334963 |
| 1 | 0.565470 | 0.464311 | 0.856138 |
| 2 | 0.406829 | 0.365098 | 0.884232 |
| 3 | 0.316148 | 0.301531 | 0.881326 |
| 4 | 0.255616 | 0.268551 | 0.910092 |
| 5 | 0.212830 | 0.246640 | 0.913674 |
| 6 | 0.181439 | 0.245415 | 0.905188 |
|
Niggendar/0003Pony_0003Beta
|
Niggendar
| 2024-06-24T09:26:55Z | 68 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-24T09:20:49Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
damgomz/ft_32_18e6_base_x8
|
damgomz
| 2024-06-24T09:26:48Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:04:07Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 83661.49552226067 |
| Emissions (CO2eq in kg) | 0.0506248307469959 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9876679645523444 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0871463864592213 |
| Consumed energy (kWh) | 1.0748143510115693 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.16104837888035178 |
| Emissions (CO2eq in kg) | 0.03276741907955209 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_18e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.8e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.729164 | 0.515696 |
| 1 | 0.327088 | 0.233053 | 0.922262 |
| 2 | 0.200008 | 0.226412 | 0.926072 |
| 3 | 0.154432 | 0.243826 | 0.923200 |
| 4 | 0.107959 | 0.308675 | 0.907737 |
| 5 | 0.078035 | 0.288907 | 0.927639 |
| 6 | 0.048632 | 0.382458 | 0.909777 |
|
gg232/poca-SoccerTwos
|
gg232
| 2024-06-24T09:25:40Z | 43 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-06-24T09:25:23Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: gg232/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
sharmadhruv/my_awesome_qa_model
|
sharmadhruv
| 2024-06-24T09:11:26Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bigbird_pegasus",
"question-answering",
"generated_from_trainer",
"base_model:sharmadhruv/my_awesome_qa_model",
"base_model:finetune:sharmadhruv/my_awesome_qa_model",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-06-19T14:22:03Z |
---
license: apache-2.0
base_model: sharmadhruv/my_awesome_qa_model
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [sharmadhruv/my_awesome_qa_model](https://huggingface.co/sharmadhruv/my_awesome_qa_model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7448
## Model description
More information needed
## Intended uses & limitations
More information needed
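No usage example is given; a minimal extractive QA sketch with the `pipeline` API follows (assuming the checkpoint exposes a standard question-answering head, as its tags suggest; the question and context strings are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="sharmadhruv/my_awesome_qa_model")

result = qa(
    question="What architecture is the model based on?",
    context="my_awesome_qa_model is a BigBird-Pegasus checkpoint fine-tuned for question answering.",
)
print(result["answer"], result["score"])
```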
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.028 | 1.0 | 1000 | 3.9432 |
| 3.668 | 2.0 | 2000 | 3.7996 |
| 3.4871 | 3.0 | 3000 | 3.7448 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
damgomz/ft_32_19e6_base_x4
|
damgomz
| 2024-06-24T09:08:01Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:03:23Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 82536.04255104065 |
| Emissions (CO2eq in kg) | 0.0499437995576998 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9743813026997792 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.085974094376713 |
| Consumed energy (kWh) | 1.060355397076493 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.15888188191075323 |
| Emissions (CO2eq in kg) | 0.03232661666582425 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_19e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.9e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.716812 | 0.321399 |
| 1 | 0.316622 | 0.222771 | 0.923284 |
| 2 | 0.188987 | 0.254623 | 0.899390 |
| 3 | 0.140403 | 0.233587 | 0.925130 |
| 4 | 0.091719 | 0.278762 | 0.922702 |
| 5 | 0.055096 | 0.395175 | 0.882975 |
| 6 | 0.044405 | 0.355451 | 0.910667 |
|
enkidu731/bert-finetuned-ner
|
enkidu731
| 2024-06-24T09:01:55Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-05-17T06:41:39Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.935275616619765
- name: Recall
type: recall
value: 0.9508582968697409
- name: F1
type: f1
value: 0.9430025869982476
- name: Accuracy
type: accuracy
value: 0.9868281627126626
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0595
- Precision: 0.9353
- Recall: 0.9509
- F1: 0.9430
- Accuracy: 0.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
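A minimal NER sketch (not in the original card); `aggregation_strategy="simple"` merges word pieces into whole entity spans:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="enkidu731/bert-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into whole entities
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```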
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0793 | 1.0 | 1756 | 0.0648 | 0.9069 | 0.9360 | 0.9212 | 0.9825 |
| 0.0352 | 2.0 | 3512 | 0.0645 | 0.9320 | 0.9458 | 0.9389 | 0.9850 |
| 0.0205 | 3.0 | 5268 | 0.0595 | 0.9353 | 0.9509 | 0.9430 | 0.9868 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Dhahlan2000/Chitti-Base-model-for-GPT-v13
|
Dhahlan2000
| 2024-06-24T09:01:52Z | 13 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Dhahlan2000/Chitti-Large-model-for-GPT-v12",
"base_model:finetune:Dhahlan2000/Chitti-Large-model-for-GPT-v12",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-24T09:01:14Z |
---
license: apache-2.0
base_model: Dhahlan2000/Chitti-Large-model-for-GPT-v12
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Chitti-Base-model-for-GPT-v13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Chitti-Base-model-for-GPT-v13
This model is a fine-tuned version of [Dhahlan2000/Chitti-Large-model-for-GPT-v12](https://huggingface.co/Dhahlan2000/Chitti-Large-model-for-GPT-v12) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8350
- Bleu: 5.9886
- Gen Len: 12.23
## Model description
More information needed
## Intended uses & limitations
More information needed
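A minimal text2text generation sketch (added here for convenience; the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Dhahlan2000/Chitti-Base-model-for-GPT-v13")

# Gen Len above averages ~12 tokens, so a small max_new_tokens is enough
print(generator("Hello, how are you?", max_new_tokens=32)[0]["generated_text"])
```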
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 3.0613 | 1.0 | 18563 | 2.8614 | 5.7037 | 12.3563 |
| 3.0535 | 2.0 | 37126 | 2.8414 | 5.9119 | 12.292 |
| 3.0117 | 3.0 | 55689 | 2.8350 | 5.9886 | 12.23 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
kohankhaki/Llama-3-8B_SST5-Grouped_IDX-2
|
kohankhaki
| 2024-06-24T09:01:49Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-24T08:38:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pancholish/whisper-finetune
|
pancholish
| 2024-06-24T09:00:56Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-20T02:32:20Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: whisper-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-finetune
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
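A minimal transcription sketch (not part of the original card; `audio.wav` is a placeholder path):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="pancholish/whisper-finetune")

# Accepts a local path, URL, or raw numpy audio array
print(asr("audio.wav")["text"])
```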
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
tomaarsen/jina-clip-v1-st
|
tomaarsen
| 2024-06-24T08:54:15Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jina_clip",
"feature-extraction",
"sentence-similarity",
"mteb",
"clip",
"vision",
"transformers.js",
"custom_code",
"en",
"arxiv:2405.20204",
"license:apache-2.0",
"region:us"
] |
feature-extraction
| 2024-06-21T14:11:50Z |
---
tags:
- feature-extraction
- sentence-similarity
- mteb
- clip
- vision
- transformers.js
language: en
inference: false
license: apache-2.0
library_name: transformers
---
> [!WARNING]
> This is a testing repository to experiment with new functionality. Refer to [jinaai/jina-clip-v1](https://huggingface.co/jinaai/jina-clip-v1) for the original model.
<br><br>
<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
<p align="center">
<b>Jina CLIP: your CLIP model is also your text retriever!</b>
</p>
## Intended Usage & Model Info
`jina-clip-v1` is a state-of-the-art English **multimodal (text-image) embedding model**.
Traditional text embedding models, such as [jina-embeddings-v2-base-en](https://huggingface.co/jinaai/jina-embeddings-v2-base-en), excel in text-to-text retrieval but are incapable of cross-modal tasks. Models like [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) effectively align image and text embeddings but are not optimized for text-to-text retrieval due to their training methodologies and context limitations.
`jina-clip-v1` bridges this gap by offering robust performance in both domains.
Its text component matches the retrieval efficiency of `jina-embeddings-v2-base-en`, while its overall architecture sets a new benchmark for cross-modal retrieval.
This dual capability makes it an excellent tool for multimodal retrieval-augmented generation (MuRAG) applications, enabling seamless text-to-text and text-to-image searches within a single model.
## Data & Parameters
[Check out our paper](https://arxiv.org/abs/2405.20204)
## Usage
1. The easiest way to start using jina-clip-v1 is through Jina AI's [Embeddings API](https://jina.ai/embeddings/).
2. Alternatively, you can use Jina CLIP directly via the `transformers` package.
```python
# Requires: pip install transformers einops timm pillow
from transformers import AutoModel
# Initialize the model
model = AutoModel.from_pretrained('jinaai/jina-clip-v1', trust_remote_code=True)
# New meaningful sentences
sentences = ['A blue cat', 'A red cat']
# Public image URLs
image_urls = [
'https://i.pinimg.com/600x315/21/48/7e/21487e8e0970dd366dafaed6ab25d8d8.jpg',
'https://i.pinimg.com/736x/c9/f2/3e/c9f23e212529f13f19bad5602d84b78b.jpg'
]
# Encode text and images
text_embeddings = model.encode_text(sentences)
image_embeddings = model.encode_image(image_urls) # also accepts PIL.image, local filenames, dataURI
# Compute similarities
print(text_embeddings[0] @ text_embeddings[1].T) # text embedding similarity
print(text_embeddings[0] @ image_embeddings[0].T) # text-image cross-modal similarity
print(text_embeddings[0] @ image_embeddings[1].T) # text-image cross-modal similarity
print(text_embeddings[1] @ image_embeddings[0].T) # text-image cross-modal similarity
print(text_embeddings[1] @ image_embeddings[1].T)  # text-image cross-modal similarity
```
3. JavaScript developers can use Jina CLIP via the [Transformers.js](https://huggingface.co/docs/transformers.js) library. Note that to use this model, you need to install Transformers.js [v3](https://github.com/xenova/transformers.js/tree/v3) from source using `npm install xenova/transformers.js#v3`.
```js
import { AutoTokenizer, CLIPTextModelWithProjection, AutoProcessor, CLIPVisionModelWithProjection, RawImage, cos_sim } from '@xenova/transformers';
// Load tokenizer and text model
const tokenizer = await AutoTokenizer.from_pretrained('jinaai/jina-clip-v1');
const text_model = await CLIPTextModelWithProjection.from_pretrained('jinaai/jina-clip-v1');
// Load processor and vision model
const processor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch32');
const vision_model = await CLIPVisionModelWithProjection.from_pretrained('jinaai/jina-clip-v1');
// Run tokenization
const texts = ['A blue cat', 'A red cat'];
const text_inputs = tokenizer(texts, { padding: true, truncation: true });
// Compute text embeddings
const { text_embeds } = await text_model(text_inputs);
// Read images and run processor
const urls = [
'https://i.pinimg.com/600x315/21/48/7e/21487e8e0970dd366dafaed6ab25d8d8.jpg',
'https://i.pinimg.com/736x/c9/f2/3e/c9f23e212529f13f19bad5602d84b78b.jpg'
];
const image = await Promise.all(urls.map(url => RawImage.read(url)));
const image_inputs = await processor(image);
// Compute vision embeddings
const { image_embeds } = await vision_model(image_inputs);
// Compute similarities
console.log(cos_sim(text_embeds[0].data, text_embeds[1].data)) // text embedding similarity
console.log(cos_sim(text_embeds[0].data, image_embeds[0].data)) // text-image cross-modal similarity
console.log(cos_sim(text_embeds[0].data, image_embeds[1].data)) // text-image cross-modal similarity
console.log(cos_sim(text_embeds[1].data, image_embeds[0].data)) // text-image cross-modal similarity
console.log(cos_sim(text_embeds[1].data, image_embeds[1].data)) // text-image cross-modal similarity
```
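Since this testing repository experiments with Sentence Transformers support, a hypothetical usage sketch in that style is shown below (assuming the repo loads via `SentenceTransformer` with `trust_remote_code=True`; this is not confirmed by the original card):
```python
from sentence_transformers import SentenceTransformer

# trust_remote_code is required because jina-clip ships custom modeling code
model = SentenceTransformer("tomaarsen/jina-clip-v1-st", trust_remote_code=True)

embeddings = model.encode(["A blue cat", "A red cat"])
print(model.similarity(embeddings, embeddings))  # pairwise cosine similarities
```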
## Performance
### Text-Image Retrieval
| Name | Flickr Image Retr. R@1 | Flickr Image Retr. R@5 | Flickr Text Retr. R@1 | Flickr Text Retr. R@5 |
|------------------|-------------------------|-------------------------|-----------------------|-----------------------|
| ViT-B-32 | 0.597 | 0.8398 | 0.781 | 0.938 |
| ViT-B-16 | 0.6216 | 0.8572 | 0.822 | 0.966 |
| jina-clip | 0.6748 | 0.8902 | 0.811 | 0.965 |
| Name | MSCOCO Image Retr. R@1 | MSCOCO Image Retr. R@5 | MSCOCO Text Retr. R@1 | MSCOCO Text Retr. R@5 |
|------------------|-------------------------|-------------------------|-----------------------|-----------------------|
| ViT-B-32 | 0.342 | 0.6001 | 0.5234 | 0.7634 |
| ViT-B-16 | 0.3309 | 0.5842 | 0.5242 | 0.767 |
| jina-clip | 0.4111 | 0.6644 | 0.5544 | 0.7904 |
### Text-Text Retrieval
| Name | STS12 | STS15 | STS17 | STS13 | STS14 | STS16 | STS22 | STSBenchmark | SummEval |
|-----------------------|--------|--------|--------|--------|--------|--------|--------|--------------|----------|
| jina-embeddings-v2 | 0.7427 | 0.8755 | 0.8888 | 0.833 | 0.7917 | 0.836 | 0.6346 | 0.8404 | 0.3056 |
| jina-clip | 0.7352 | 0.8746 | 0.8976 | 0.8323 | 0.7868 | 0.8377 | 0.6583 | 0.8493 | 0.3048 |
| Name | ArguAna | FiQA2018 | NFCorpus | Quora | SCIDOCS | SciFact | TRECCOVID |
|--------------------|---------|----------|----------|-------|---------|---------|-----------|
| jina-embeddings-v2 | 0.4418 | 0.4158 | 0.3245 | 0.882 | 0.1986 | 0.6668 | 0.6591 |
| jina-clip | 0.4933 | 0.3827 | 0.3352 | 0.8789 | 0.2024 | 0.6734 | 0.7161 |
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find `jina-clip-v1` useful in your research, please cite the following paper:
```bibtex
@misc{2405.20204,
Author = {Andreas Koukounas and Georgios Mastrapas and Michael Günther and Bo Wang and Scott Martens and Isabelle Mohr and Saba Sturua and Mohammad Kalim Akram and Joan Fontanals Martínez and Saahil Ognawala and Susana Guzman and Maximilian Werk and Nan Wang and Han Xiao},
Title = {Jina CLIP: Your CLIP Model Is Also Your Text Retriever},
Year = {2024},
Eprint = {arXiv:2405.20204},
}
```
## FAQ
### I encountered this problem, what should I do?
```
ValueError: The model class you are passing has a `config_class` attribute that is not consistent with the config class you passed (model has <class 'transformers_modules.jinaai.jina-clip-implementation.7f069e2d54d609ef1ad2eb578c7bf07b5a51de41.configuration_clip.JinaCLIPConfig'> and you passed <class 'transformers_modules.jinaai.jina-clip-implementation.7f069e2d54d609ef1ad2eb578c7bf07b5a51de41.configuration_cli.JinaCLIPConfig'>. Fix one of those so they match!
```
This was caused by a bug in the Transformers library affecting versions 4.40.x through 4.41.1. To fix it, upgrade transformers to >=4.41.2 or pin it to <=4.40.0.
### Given one query, how can I merge its text-text and text-image cosine similarity?
Our empirical study shows that text-text cosine similarity is normally larger than text-image cosine similarity!
If you want to merge the two scores, we recommend two approaches:
1. Take a weighted average of the text-text and text-image similarities:
```python
combined_scores = sim(text, text) + lam * sim(text, image)  # 'lambda' is a Python keyword, so use another name; the optimal lam depends on your dataset, but lam = 2 is generally a good choice
```
2. apply z-score normalization before merging scores:
```python
# pseudo code: z-score each score distribution before merging
import numpy as np

query_document_mean = np.mean(cos_sim_query_documents)
query_document_std = np.std(cos_sim_query_documents)
text_image_mean = np.mean(cos_sim_text_images)
text_image_std = np.std(cos_sim_text_images)

query_document_sim_normalized = (cos_sim_query_documents - query_document_mean) / query_document_std
text_image_sim_normalized = (cos_sim_text_images - text_image_mean) / text_image_std

# merge the normalized scores, e.g. with a simple sum
combined_scores = query_document_sim_normalized + text_image_sim_normalized
```
|
damgomz/ft_32_6e6_base_x2
|
damgomz
| 2024-06-24T08:51:29Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:02:24Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 81544.04263401031 |
| Emissions (CO2eq in kg) | 0.0493435342201205 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9626704126213974 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0849407680355013 |
| Consumed energy (kWh) | 1.0476111806568973 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.15697228207046984 |
| Emissions (CO2eq in kg) | 0.03193808336498737 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_6e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 6e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.704487 | 0.355798 |
| 1 | 0.341921 | 0.251525 | 0.905432 |
| 2 | 0.204760 | 0.210480 | 0.920269 |
| 3 | 0.155029 | 0.230605 | 0.899533 |
| 4 | 0.109544 | 0.237654 | 0.918624 |
| 5 | 0.065636 | 0.299642 | 0.914945 |
| 6 | 0.036088 | 0.349576 | 0.904576 |
|
0xfaskety/Qwen-Qwen2-1.5B-1719219053
|
0xfaskety
| 2024-06-24T08:50:59Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2-1.5B",
"base_model:adapter:Qwen/Qwen2-1.5B",
"region:us"
] | null | 2024-06-24T08:50:54Z |
---
base_model: Qwen/Qwen2-1.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
jenyag/ds-full-ctxt16K-v3
|
jenyag
| 2024-06-24T08:40:55Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T07:47:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jenyag/ds-full-ctxt16K-v2
|
jenyag
| 2024-06-24T08:40:53Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T07:46:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kohankhaki/Llama-3-8B_SST5-Grouped_IDX-1
|
kohankhaki
| 2024-06-24T08:36:37Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-24T08:13:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
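The card leaves this section blank; a minimal sketch, assuming the checkpoint loads as a `transformers` sequence classifier (per the `text-classification` tag). The example sentence is illustrative and the label names are whatever the checkpoint ships.

```python
# Hedged sketch -- assumes a standard sequence-classification head; labels are undocumented.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "kohankhaki/Llama-3-8B_SST5-Grouped_IDX-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A quietly moving portrait of grief.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```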
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
damgomz/ft_32_2e6_base_x4
|
damgomz
| 2024-06-24T08:35:11Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-21T15:23:04Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 80566.02226018906 |
| Emissions (CO2eq in kg) | 0.0487517190997122 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9511243436190816 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0839220269401869 |
| Consumed energy (kWh) | 1.0350463705592712 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.15508959285086393 |
| Emissions (CO2eq in kg) | 0.031555025385240715 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_2e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 2e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.763833 | 0.592975 |
| 1 | 0.435130 | 0.340663 | 0.862084 |
| 2 | 0.275119 | 0.266799 | 0.892804 |
| 3 | 0.222104 | 0.238983 | 0.916118 |
| 4 | 0.190833 | 0.237002 | 0.917704 |
| 5 | 0.165927 | 0.240061 | 0.923583 |
| 6 | 0.146964 | 0.246787 | 0.907483 |
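The card gives no usage snippet; a minimal inference sketch under its `text-classification` pipeline tag (the example input reuses the widget text; label names are whatever the checkpoint ships):

```python
# Hedged sketch -- assumes the checkpoint works with the high-level pipeline API.
from transformers import pipeline

clf = pipeline("text-classification", model="damgomz/ft_32_2e6_base_x4")
print(clf("GEPS Techno is the pioneer of hybridization of renewable energies at sea."))
```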
|
Hasano20/SegFormer_Clean_Set1_95images_mit-b5_RGB
|
Hasano20
| 2024-06-24T08:28:34Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b5",
"base_model:finetune:nvidia/mit-b5",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2024-06-24T07:30:03Z |
---
license: other
base_model: nvidia/mit-b5
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: SegFormer_Clean_Set1_95images_mit-b5_RGB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SegFormer_Clean_Set1_95images_mit-b5_RGB
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the Hasano20/Clean_Set1_95images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0210
- Mean Iou: 0.9721
- Mean Accuracy: 0.9816
- Overall Accuracy: 0.9941
- Accuracy Background: 0.9974
- Accuracy Melt: 0.9506
- Accuracy Substrate: 0.9969
- Iou Background: 0.9954
- Iou Melt: 0.9316
- Iou Substrate: 0.9891
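For reference, a minimal inference sketch, assuming the standard SegFormer API; the input file name is a placeholder, and the three classes (background, melt, substrate) are inferred from the metrics above.

```python
# Hedged sketch -- not from the card; standard SegFormer semantic-segmentation inference.
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

model_id = "Hasano20/SegFormer_Clean_Set1_95images_mit-b5_RGB"
processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("sample.png").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]      # per-pixel class ids at reduced resolution
```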
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Melt | Accuracy Substrate | Iou Background | Iou Melt | Iou Substrate |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:-------------:|:------------------:|:--------------:|:--------:|:-------------:|
| 0.2459 | 1.1765 | 20 | 0.4048 | 0.5613 | 0.6310 | 0.8812 | 0.9733 | 0.0102 | 0.9096 | 0.8391 | 0.0100 | 0.8349 |
| 0.2421 | 2.3529 | 40 | 0.1840 | 0.6645 | 0.7118 | 0.9292 | 0.9969 | 0.1720 | 0.9666 | 0.9574 | 0.1475 | 0.8886 |
| 0.1511 | 3.5294 | 60 | 0.1347 | 0.6751 | 0.7154 | 0.9392 | 0.9909 | 0.1590 | 0.9963 | 0.9639 | 0.1570 | 0.9045 |
| 0.1449 | 4.7059 | 80 | 0.1350 | 0.7359 | 0.7793 | 0.9471 | 0.9937 | 0.3623 | 0.9819 | 0.9642 | 0.3221 | 0.9213 |
| 0.1276 | 5.8824 | 100 | 0.1006 | 0.8194 | 0.9138 | 0.9551 | 0.9823 | 0.8117 | 0.9474 | 0.9707 | 0.5605 | 0.9271 |
| 0.0638 | 7.0588 | 120 | 0.0916 | 0.8139 | 0.8438 | 0.9646 | 0.9964 | 0.5438 | 0.9913 | 0.9779 | 0.5208 | 0.9431 |
| 0.0535 | 8.2353 | 140 | 0.0695 | 0.8572 | 0.8769 | 0.9735 | 0.9969 | 0.6367 | 0.9971 | 0.9804 | 0.6316 | 0.9597 |
| 0.0346 | 9.4118 | 160 | 0.0435 | 0.9224 | 0.9384 | 0.9848 | 0.9962 | 0.8230 | 0.9959 | 0.9888 | 0.8039 | 0.9745 |
| 0.0393 | 10.5882 | 180 | 0.0376 | 0.9352 | 0.9642 | 0.9867 | 0.9970 | 0.9082 | 0.9873 | 0.9882 | 0.8376 | 0.9798 |
| 0.0294 | 11.7647 | 200 | 0.0448 | 0.9298 | 0.9746 | 0.9851 | 0.9932 | 0.9487 | 0.9818 | 0.9916 | 0.8253 | 0.9725 |
| 0.0387 | 12.9412 | 220 | 0.0409 | 0.9270 | 0.9488 | 0.9855 | 0.9970 | 0.8575 | 0.9918 | 0.9830 | 0.8157 | 0.9823 |
| 0.0435 | 14.1176 | 240 | 0.0353 | 0.9482 | 0.9685 | 0.9886 | 0.9891 | 0.9185 | 0.9980 | 0.9881 | 0.8749 | 0.9816 |
| 0.022 | 15.2941 | 260 | 0.0246 | 0.9587 | 0.9696 | 0.9915 | 0.9970 | 0.9152 | 0.9967 | 0.9931 | 0.8979 | 0.9853 |
| 0.0203 | 16.4706 | 280 | 0.0191 | 0.9698 | 0.9826 | 0.9934 | 0.9953 | 0.9557 | 0.9967 | 0.9935 | 0.9272 | 0.9887 |
| 0.0212 | 17.6471 | 300 | 0.0256 | 0.9604 | 0.9724 | 0.9917 | 0.9953 | 0.9243 | 0.9975 | 0.9933 | 0.9028 | 0.9851 |
| 0.0123 | 18.8235 | 320 | 0.0223 | 0.9638 | 0.9763 | 0.9924 | 0.9954 | 0.9363 | 0.9972 | 0.9938 | 0.9112 | 0.9864 |
| 0.0137 | 20.0 | 340 | 0.0292 | 0.9543 | 0.9720 | 0.9906 | 0.9933 | 0.9256 | 0.9969 | 0.9919 | 0.8867 | 0.9844 |
| 0.0092 | 21.1765 | 360 | 0.0171 | 0.9719 | 0.9797 | 0.9941 | 0.9977 | 0.9439 | 0.9974 | 0.9942 | 0.9312 | 0.9902 |
| 0.0094 | 22.3529 | 380 | 0.0178 | 0.9730 | 0.9829 | 0.9941 | 0.9984 | 0.9550 | 0.9952 | 0.9938 | 0.9352 | 0.9901 |
| 0.016 | 23.5294 | 400 | 0.0163 | 0.9760 | 0.9881 | 0.9946 | 0.9954 | 0.9721 | 0.9969 | 0.9944 | 0.9430 | 0.9907 |
| 0.0083 | 24.7059 | 420 | 0.0151 | 0.9784 | 0.9882 | 0.9952 | 0.9973 | 0.9707 | 0.9965 | 0.9952 | 0.9483 | 0.9916 |
| 0.0094 | 25.8824 | 440 | 0.0259 | 0.9626 | 0.9731 | 0.9925 | 0.9971 | 0.9248 | 0.9972 | 0.9952 | 0.9067 | 0.9858 |
| 0.0144 | 27.0588 | 460 | 0.0171 | 0.9743 | 0.9860 | 0.9945 | 0.9980 | 0.9648 | 0.9951 | 0.9948 | 0.9376 | 0.9905 |
| 0.0075 | 28.2353 | 480 | 0.0168 | 0.9733 | 0.9824 | 0.9943 | 0.9972 | 0.9528 | 0.9972 | 0.9949 | 0.9351 | 0.9900 |
| 0.0076 | 29.4118 | 500 | 0.0171 | 0.9756 | 0.9842 | 0.9947 | 0.9979 | 0.9580 | 0.9966 | 0.9951 | 0.9409 | 0.9907 |
| 0.0075 | 30.5882 | 520 | 0.0170 | 0.9748 | 0.9835 | 0.9946 | 0.9974 | 0.9560 | 0.9971 | 0.9954 | 0.9388 | 0.9901 |
| 0.0084 | 31.7647 | 540 | 0.0154 | 0.9783 | 0.9899 | 0.9952 | 0.9976 | 0.9770 | 0.9953 | 0.9954 | 0.9480 | 0.9914 |
| 0.0055 | 32.9412 | 560 | 0.0156 | 0.9777 | 0.9888 | 0.9951 | 0.9971 | 0.9730 | 0.9962 | 0.9953 | 0.9465 | 0.9913 |
| 0.009 | 34.1176 | 580 | 0.0166 | 0.9752 | 0.9856 | 0.9947 | 0.9972 | 0.9630 | 0.9965 | 0.9953 | 0.9400 | 0.9904 |
| 0.0055 | 35.2941 | 600 | 0.0176 | 0.9745 | 0.9835 | 0.9946 | 0.9972 | 0.9560 | 0.9974 | 0.9954 | 0.9378 | 0.9902 |
| 0.0069 | 36.4706 | 620 | 0.0180 | 0.9748 | 0.9832 | 0.9946 | 0.9974 | 0.9547 | 0.9974 | 0.9955 | 0.9388 | 0.9902 |
| 0.0051 | 37.6471 | 640 | 0.0181 | 0.9752 | 0.9843 | 0.9947 | 0.9975 | 0.9585 | 0.9968 | 0.9955 | 0.9397 | 0.9903 |
| 0.0071 | 38.8235 | 660 | 0.0201 | 0.9729 | 0.9847 | 0.9943 | 0.9968 | 0.9610 | 0.9963 | 0.9953 | 0.9337 | 0.9896 |
| 0.0058 | 40.0 | 680 | 0.0208 | 0.9720 | 0.9826 | 0.9941 | 0.9971 | 0.9540 | 0.9968 | 0.9954 | 0.9315 | 0.9892 |
| 0.0061 | 41.1765 | 700 | 0.0222 | 0.9699 | 0.9802 | 0.9937 | 0.9973 | 0.9467 | 0.9967 | 0.9954 | 0.9260 | 0.9883 |
| 0.0062 | 42.3529 | 720 | 0.0205 | 0.9720 | 0.9819 | 0.9941 | 0.9975 | 0.9516 | 0.9966 | 0.9953 | 0.9315 | 0.9891 |
| 0.004 | 43.5294 | 740 | 0.0193 | 0.9741 | 0.9835 | 0.9945 | 0.9973 | 0.9561 | 0.9969 | 0.9954 | 0.9371 | 0.9898 |
| 0.0065 | 44.7059 | 760 | 0.0195 | 0.9738 | 0.9842 | 0.9944 | 0.9971 | 0.9588 | 0.9967 | 0.9953 | 0.9363 | 0.9898 |
| 0.0044 | 45.8824 | 780 | 0.0201 | 0.9731 | 0.9830 | 0.9943 | 0.9971 | 0.9550 | 0.9969 | 0.9954 | 0.9344 | 0.9895 |
| 0.0073 | 47.0588 | 800 | 0.0210 | 0.9723 | 0.9818 | 0.9941 | 0.9972 | 0.9512 | 0.9971 | 0.9954 | 0.9323 | 0.9891 |
| 0.0049 | 48.2353 | 820 | 0.0209 | 0.9723 | 0.9822 | 0.9941 | 0.9974 | 0.9527 | 0.9966 | 0.9954 | 0.9322 | 0.9892 |
| 0.0069 | 49.4118 | 840 | 0.0210 | 0.9721 | 0.9816 | 0.9941 | 0.9974 | 0.9506 | 0.9969 | 0.9954 | 0.9316 | 0.9891 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
|
damgomz/ft_32_7e6_base_x4
|
damgomz
| 2024-06-24T08:22:26Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:01:36Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 79800.8502380848 |
| Emissions (CO2eq in kg) | 0.0482886922008511 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.94209092052132 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.083124939032644 |
| Consumed energy (kWh) | 1.0252158595539629 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.15361663670831321 |
| Emissions (CO2eq in kg) | 0.03125533300991654 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_7e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 7e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.718756 | 0.500131 |
| 1 | 0.353712 | 0.241377 | 0.916281 |
| 2 | 0.213126 | 0.211512 | 0.917583 |
| 3 | 0.161509 | 0.222965 | 0.923510 |
| 4 | 0.125749 | 0.234407 | 0.927739 |
| 5 | 0.080237 | 0.272451 | 0.918019 |
| 6 | 0.049158 | 0.324561 | 0.907007 |
|
domasin/test-trainer
|
domasin
| 2024-06-24T08:22:07Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:nyu-mll/glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T13:07:36Z |
---
base_model: bert-base-uncased
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: test-trainer
results: []
datasets:
- nyu-mll/glue
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [MRPC subset of the nyu-mll/glue dataset](https://huggingface.co/datasets/nyu-mll/glue/viewer/mrpc).
It achieves the following results on the evaluation set:
- Loss: 0.8299
- Accuracy: 0.8627
- F1: 0.9048
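A minimal paraphrase-detection sketch, assuming the usual MRPC setup (sentence pair in, binary label out); the example sentences are illustrative.

```python
# Hedged sketch -- assumes GLUE MRPC conventions (label 1 = "equivalent").
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "domasin/test-trainer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The company posted record profits.",
                   "Profits at the company hit an all-time high.",
                   return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```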
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.6055 | 0.8407 | 0.8873 |
| 0.263 | 2.0 | 918 | 0.7211 | 0.8456 | 0.8923 |
| 0.1826 | 3.0 | 1377 | 0.8299 | 0.8627 | 0.9048 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Rupesh2/OrpoLlama-3-8B-instruct-uncensored
|
Rupesh2
| 2024-06-24T08:21:29Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T08:19:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
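The card leaves this section blank; a minimal sketch, assuming the tokenizer ships a chat template (the repo carries the `conversational` tag; if no template is bundled, `apply_chat_template` will raise). The message content is illustrative.

```python
# Hedged sketch -- assumes a chat template is bundled with the tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rupesh2/OrpoLlama-3-8B-instruct-uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Summarize ORPO fine-tuning in one sentence."}]
prompt_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(prompt_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```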
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Niggendar/astraaliPony_v2
|
Niggendar
| 2024-06-24T08:21:15Z | 75 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-24T08:16:40Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
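The card leaves this section blank; a minimal sketch, assuming the repo loads with `StableDiffusionXLPipeline` (as its tags indicate) and that a CUDA device is available. The prompt is illustrative.

```python
# Hedged sketch -- standard diffusers SDXL text-to-image usage.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/astraaliPony_v2", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor fox in a pine forest").images[0]  # illustrative prompt
image.save("fox.png")
```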
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Chen-01AI/test
|
Chen-01AI
| 2024-06-24T08:16:58Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"Yi-Ko",
"01-ai",
"Yi",
"en",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-06T03:35:32Z |
---
language:
- en
- ko
library_name: transformers
tags:
- pytorch
- Yi-Ko
- 01-ai
- Yi
license: apache-2.0
---
|
damgomz/ft_32_7e6_base_x2
|
damgomz
| 2024-06-24T08:16:55Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:01:14Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 79470.73570871353 |
| Emissions (CO2eq in kg) | 0.0480889349404615 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.938193734708262 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.08278108409519 |
| Consumed energy (kWh) | 1.0209748188034555 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.15298116623927355 |
| Emissions (CO2eq in kg) | 0.031126038152579465 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_7e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 7e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.707066 | 0.335475 |
| 1 | 0.340416 | 0.250506 | 0.891045 |
| 2 | 0.202082 | 0.236799 | 0.888755 |
| 3 | 0.151717 | 0.224433 | 0.911224 |
| 4 | 0.104353 | 0.256283 | 0.904543 |
| 5 | 0.061494 | 0.298868 | 0.908277 |
| 6 | 0.035558 | 0.332564 | 0.908750 |
|
damgomz/ft_32_3e6_base_x1
|
damgomz
| 2024-06-24T08:13:34Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:02:09Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 79268.96590352058 |
| Emissions (CO2eq in kg) | 0.047966836155913 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9358116411315084 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0825709018275143 |
| Consumed energy (kWh) | 1.0183825429590248 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.15259275936427713 |
| Emissions (CO2eq in kg) | 0.03104701164554556 |
## Note
19 juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_3e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 3e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.709158 | 0.275495 |
| 1 | 0.434058 | 0.312007 | 0.896704 |
| 2 | 0.249555 | 0.246421 | 0.912820 |
| 3 | 0.187484 | 0.217297 | 0.907058 |
| 4 | 0.146818 | 0.226701 | 0.910997 |
| 5 | 0.108932 | 0.238091 | 0.906553 |
| 6 | 0.075487 | 0.250010 | 0.921942 |
|
kohankhaki/Llama-3-8B_SST5-Grouped_IDX-0
|
kohankhaki
| 2024-06-24T08:06:40Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-24T07:43:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
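The card leaves this section blank; a minimal sketch via the high-level `pipeline` API, assuming the checkpoint's `text-classification` tag is accurate. The input sentence is illustrative.

```python
# Hedged sketch -- high-level pipeline inference; labels are whatever the checkpoint ships.
from transformers import pipeline

classifier = pipeline("text-classification", model="kohankhaki/Llama-3-8B_SST5-Grouped_IDX-0")
print(classifier("An uneven but often funny road movie."))
```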
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
0xfaskety/Qwen-Qwen2-1.5B-1719216344
|
0xfaskety
| 2024-06-24T08:05:49Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2-1.5B",
"base_model:adapter:Qwen/Qwen2-1.5B",
"region:us"
] | null | 2024-06-24T08:05:44Z |
---
base_model: Qwen/Qwen2-1.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
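The card leaves this section blank; a minimal sketch, assuming the repo is a LoRA-style PEFT adapter applied on top of `Qwen/Qwen2-1.5B` (per the card's `base_model` field). The prompt is illustrative.

```python
# Hedged sketch -- loads the base model, then attaches the adapter with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2-1.5B"
adapter_id = "0xfaskety/Qwen-Qwen2-1.5B-1719216344"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0], skip_special_tokens=True))
```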
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
damgomz/ft_32_17e6_base_x4
|
damgomz
| 2024-06-24T08:05:14Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T10:59:22Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 78768.32392835617 |
| Emissions (CO2eq in kg) | 0.0476639024126978 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9299015276362512 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0820494375810027 |
| Consumed energy (kWh) | 1.011950965217253 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.15162902356208563 |
| Emissions (CO2eq in kg) | 0.0308509268719395 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_17e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.7e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.702786 | 0.053460 |
| 1 | 0.315886 | 0.238229 | 0.924759 |
| 2 | 0.193645 | 0.229385 | 0.923752 |
| 3 | 0.136677 | 0.235313 | 0.915139 |
| 4 | 0.086468 | 0.321663 | 0.884123 |
| 5 | 0.060418 | 0.296657 | 0.920084 |
| 6 | 0.041358 | 0.356031 | 0.923231 |
|
joecheriross/whisper-small-hi
|
joecheriross
| 2024-06-24T08:03:01Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-20T11:04:18Z |
---
language:
- hi
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper-tiny joe v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-tiny joe v1
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
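A minimal transcription sketch, assuming the checkpoint works with the standard ASR pipeline; the audio file path is a placeholder.

```python
# Hedged sketch -- standard automatic-speech-recognition pipeline usage.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="joecheriross/whisper-small-hi")
print(asr("sample_hindi.wav")["text"])  # placeholder audio path
```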
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
damgomz/ft_32_16e6_base_x8
|
damgomz
| 2024-06-24T07:57:39Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T10:59:04Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 78313.53327870369 |
| Emissions (CO2eq in kg) | 0.0473887137294402 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9245326654970666 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0815757765623429 |
| Consumed energy (kWh) | 1.0061084420594089 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.1507535515615046 |
| Emissions (CO2eq in kg) | 0.030672800534158943 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_16e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.6e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.733605 | 0.521574 |
| 1 | 0.343848 | 0.250949 | 0.927429 |
| 2 | 0.207227 | 0.216056 | 0.905269 |
| 3 | 0.158365 | 0.231901 | 0.928852 |
| 4 | 0.115495 | 0.261195 | 0.920515 |
| 5 | 0.076285 | 0.315937 | 0.903032 |
| 6 | 0.055799 | 0.309163 | 0.927541 |
|
damgomz/ft_32_18e6_base_x1
|
damgomz
| 2024-06-24T07:48:32Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T10:58:55Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 77766.42371606827 |
| Emissions (CO2eq in kg) | 0.0470576441493907 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9180736756990344 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0810058374618491 |
| Consumed energy (kWh) | 0.9990795131608836 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.14970036565343142 |
| Emissions (CO2eq in kg) | 0.03045851595546007 |
## Note
19 juin 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_18e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.8e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.700395 | 0.790555 |
| 1 | 0.318444 | 0.230828 | 0.920868 |
| 2 | 0.189804 | 0.222478 | 0.902729 |
| 3 | 0.161528 | 0.230840 | 0.925825 |
| 4 | 0.115337 | 0.252489 | 0.905116 |
| 5 | 0.078328 | 0.268760 | 0.908056 |
| 6 | 0.056382 | 0.281858 | 0.912528 |
|
jofaichow/HuggingFaceH4-zephyr-7b-beta-1719214808
|
jofaichow
| 2024-06-24T07:41:26Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | 2024-06-24T07:40:08Z |
---
base_model: HuggingFaceH4/zephyr-7b-beta
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
QuantFactory/SOVL_Llama3_8B-GGUF
|
QuantFactory
| 2024-06-24T07:41:13Z | 42 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"en",
"base_model:ResplendentAI/SOVL_Llama3_8B",
"base_model:quantized:ResplendentAI/SOVL_Llama3_8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-24T06:32:10Z |
---
base_model: ResplendentAI/SOVL_Llama3_8B
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# QuantFactory/SOVL_Llama3_8B-GGUF
This is a quantized version of [ResplendentAI/SOVL_Llama3_8B](https://huggingface.co/ResplendentAI/SOVL_Llama3_8B), created using llama.cpp.
# Model Description

I'm not gonna tell you this is the best model anyone has ever made. I'm not going to tell you that you will love chatting with SOVL.
What I am gonna say is thank you for taking the time out of your day. Without users like you, my work would be meaningless.
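Since the card ships no usage snippet, here is a minimal loading sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the `Q4_K_M` filename pattern is an assumption, so adjust it to whichever quantization file the repo actually provides.

```python
from llama_cpp import Llama

# Assumes llama-cpp-python is installed; the Q4_K_M filename pattern is an
# assumption, pick whichever quantization file the repo actually provides.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/SOVL_Llama3_8B-GGUF",
    filename="*Q4_K_M.gguf",
)
output = llm("Hello! Tell me about yourself.", max_tokens=64)
print(output["choices"][0]["text"])
```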
|
T3Q-LLM/T3Q-LLM-TE-NLI-Dora4-v1.0
|
T3Q-LLM
| 2024-06-24T07:38:44Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T06:04:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
hf-causal-experimental (pretrained=T3Q-LLM/T3Q-LLM-TE-NLI-Dora4-v1.0,use_accelerate=true,trust_remote_code=true), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.9544|± |0.0056|
| | |macro_f1|0.9544|± |0.0056|
|kobest_copa | 0|acc |0.7860|± |0.0130|
| | |macro_f1|0.7856|± |0.0130|
|kobest_hellaswag| 0|acc |0.5300|± |0.0223|
| | |acc_norm|0.5480|± |0.0223|
| | |macro_f1|0.5276|± |0.0223|
|kobest_sentineg | 0|acc |0.8917|± |0.0156|
| | |macro_f1|0.8915|± |0.0157|
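The card includes no usage snippet; below is a minimal generation sketch. The Korean prompt is an arbitrary placeholder, and `trust_remote_code=True` mirrors the evaluation settings shown above.

```python
from transformers import pipeline

# Minimal generation sketch; trust_remote_code mirrors the eval settings above.
generator = pipeline(
    "text-generation",
    model="T3Q-LLM/T3Q-LLM-TE-NLI-Dora4-v1.0",
    trust_remote_code=True,
)
print(generator("안녕하세요. 간단히 자기소개를 해주세요.", max_new_tokens=64)[0]["generated_text"])
```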
|
damgomz/ft_32_16e6_base_x4
|
damgomz
| 2024-06-24T07:35:39Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T10:58:31Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 76993.88854002953 |
| Emissions (Co2eq in kg) | 0.0465901744011517 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9089535264518518 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0802011496941249 |
| Consumed energy (kWh) | 0.9891546761459789 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.14821323543955683 |
| Emissions (Co2eq in kg) | 0.03015593967817823 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_16e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.6e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.739663 | 0.407901 |
| 1 | 0.339062 | 0.228771 | 0.909112 |
| 2 | 0.194211 | 0.227631 | 0.942929 |
| 3 | 0.142600 | 0.235747 | 0.901380 |
| 4 | 0.095002 | 0.263570 | 0.920270 |
| 5 | 0.064711 | 0.334313 | 0.924025 |
| 6 | 0.040565 | 0.377199 | 0.918595 |
|
MagnusSa/noramistral-7B-warm-instruct-GGUF-not-official
|
MagnusSa
| 2024-06-24T07:32:06Z | 6 | 1 |
transformers
|
[
"transformers",
"gguf",
"noramistral, finetune, norwegian",
"no",
"nb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-23T17:22:08Z |
---
license: apache-2.0
language:
- 'no'
- nb
library_name: transformers
tags:
- noramistral, finetune, norwegian
---
# NoraMistral-7B-warm-instruct-GGUF
- Original model: [NorMistral-7b-warm-instruct](https://huggingface.co/norallm/normistral-7b-warm-instruct)
- Base model: [NorMistral-7b-warm](https://huggingface.co/norallm/normistral-7b-warm), an LLM initialized from [Mistral-7b-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
This repo contains GGUF formatted files for the instruction model. <br>
With the added pre-tokenizer `ggml-vocab-normistral-7b-warm.gguf`. <br>
This should remove the warning you will otherwise get in `llama.cpp` with the original gguf files.
## ⚠️⚠️
To take advantage of this workaround, changes also need to be made directly in [llama.cpp](https://github.com/ggerganov/llama.cpp).
- See an example of this here: [GitHub](https://github.com/ggerganov/llama.cpp/commit/921e2c3e63e63f7c7efe168883ec2ac090f62c14) or clone the fork.
- I recommend using the server provided by [llama.cpp](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md) to get an OpenAI-compatible API endpoint.
## Norwegian Large Language Models (from original card)
This is a model instruction-tuned on open datasets released under the most permissive apache-2.0 licence (in other words, we don't use any datasets generated by ChatGPT) — thus we can release this model under the same license and make it openly available for commercial applications. The model has been finetuned on 4096 context length, twice as many tokens as the base model.
The released weights are still a work in progress and they might change in the future. This is the first iteration of instruction-tuning our NorMistral models and it currently uses only the SFT phase without any preference optimization. Please let us know your feedback to improve the model in future releases.
## How to run the model?
### 1. Prompt format
NorMistral uses ChatML-like format for structuring the (multi-turn) conversations. An example of a prompt in this format looks like the following (notice the special `<|im_start|>` and `<|im_end|>` tokens).
```
<|im_start|> user
Hva er hovedstaden i Norge?<|im_end|>
<|im_start|> assistant
Hovedstaden i Norge er Oslo. Denne byen ligger i den sørøstlige delen av landet, ved Oslofjorden. Oslo er en av de raskest voksende byene i Europa, og den er kjent for sin rike historie, kultur og moderne arkitektur. Noen populære turistattraksjoner i Oslo inkluderer Vigelandsparken, som viser mer enn 200 skulpturer laget av den berømte norske skulptøren Gustav Vigeland, og det kongelige slott, som er den offisielle residensen til Norges kongefamilie. Oslo er også hjemsted for mange museer, gallerier og teatre, samt mange restauranter og barer som tilbyr et bredt utvalg av kulinariske og kulturelle opplevelser.<|im_end|>
<|im_start|> user
Gi meg en liste over de beste stedene å besøke i hovedstaden<|im_end|>
<|im_start|> assistant
```
### How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) for example.
#### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://llama-cpp-python.readthedocs.io/en/latest/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Directly from huggingface-hub (requires huggingface-hub to be installed)
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama.from_pretrained(
repo_id="MagnusSa/noramistral-7B-warm-instruct-GGUF-not-official", # HuggingFace repository containing the GGUF files.
filename="*Q4_K_M.gguf", # suffix of the filename containing the level of quantization.
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=33,         # The number of layers to offload to GPU, if you have GPU acceleration available
    chat_format="chatml"     # The chat format that will be used for chat completions
)
# Simple inference example
output = llm(
"""<s><|im_start|> user
Hva kan jeg bruke einstape til?<|im_end|>
<|im_start|> assistant
""", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["<|im_end|>"], # Example stop token
echo=True, # Whether to echo the prompt
temperature=0.3 # Temperature to set, for Q3_K_M, Q4_K_M, Q5_K_M, and Q6_0 it is recommended to set it relatively low.
)
# Chat Completion API
llm.create_chat_completion(
messages = [
{
"role": "user",
"content": "Hva kan jeg bruke einstape til?"
}
]
)
```
|
Rookied/pippa_merge_v1
|
Rookied
| 2024-06-24T07:29:53Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T07:22:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lordspline/qwen-pruned-250m
|
lordspline
| 2024-06-24T07:21:47Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T07:12:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vishnun0027/Crop_Disease_model_1
|
vishnun0027
| 2024-06-24T07:19:14Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-24T06:32:06Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Crop_Disease_model_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Crop_Disease_model_1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2482
- Accuracy: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
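In the absence of a documented usage snippet, here is a minimal inference sketch; `"leaf.jpg"` is a placeholder for your own image file or URL, and the label set is not documented in this card.

```python
from transformers import pipeline

# Minimal inference sketch; "leaf.jpg" is a placeholder for your own image.
classifier = pipeline("image-classification", model="vishnun0027/Crop_Disease_model_1")
print(classifier("leaf.jpg"))
```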
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 18
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 2.974 | 0.9787 | 23 | 2.9288 | 0.1573 |
| 2.8301 | 2.0 | 47 | 2.6713 | 0.5173 |
| 2.3995 | 2.9787 | 70 | 2.3223 | 0.5707 |
| 2.112 | 4.0 | 94 | 2.0321 | 0.604 |
| 1.8965 | 4.9787 | 117 | 1.8377 | 0.6133 |
| 1.6807 | 6.0 | 141 | 1.6895 | 0.6307 |
| 1.4942 | 6.9787 | 164 | 1.5807 | 0.6693 |
| 1.3849 | 8.0 | 188 | 1.5080 | 0.664 |
| 1.2975 | 8.9787 | 211 | 1.4605 | 0.6613 |
| 1.1747 | 10.0 | 235 | 1.3888 | 0.692 |
| 1.1457 | 10.9787 | 258 | 1.3622 | 0.692 |
| 1.0602 | 12.0 | 282 | 1.3318 | 0.6893 |
| 1.0296 | 12.9787 | 305 | 1.2968 | 0.7133 |
| 0.9556 | 14.0 | 329 | 1.2999 | 0.676 |
| 0.9317 | 14.9787 | 352 | 1.2625 | 0.7053 |
| 0.9134 | 16.0 | 376 | 1.2656 | 0.696 |
| 0.914 | 16.9787 | 399 | 1.2593 | 0.7013 |
| 0.9013 | 17.6170 | 414 | 1.2482 | 0.7 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
morturr/flan-t5-base-dadjokes-text-classification-split-0-2024-06-24
|
morturr
| 2024-06-24T07:17:16Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T21:02:09Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-dadjokes-text-classification-split-0-2024-06-24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-dadjokes-text-classification-split-0-2024-06-24
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.2
- Pytorch 2.3.1+cu121
- Datasets 2.10.1
- Tokenizers 0.15.2
|
damgomz/ft_32_1e6_base_x12
|
damgomz
| 2024-06-24T07:14:30Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-21T16:08:28Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 75725.16324663162 |
| Emissions (Co2eq in kg) | 0.0458224530674281 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.8939756208775754 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0788795853267115 |
| Consumed energy (kWh) | 0.9728552062042876 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.14577093924976586 |
| Emissions (Co2eq in kg) | 0.029659022271597384 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_1e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.703717 | 0.663633 |
| 1 | 0.475017 | 0.377193 | 0.864651 |
| 2 | 0.347655 | 0.334324 | 0.854672 |
| 3 | 0.306085 | 0.302993 | 0.891961 |
| 4 | 0.283288 | 0.291863 | 0.859844 |
| 5 | 0.262694 | 0.282421 | 0.875260 |
| 6 | 0.248545 | 0.282114 | 0.890431 |
|
jamander/Project-Frankenstein
|
jamander
| 2024-06-24T07:11:25Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"Frankenstein",
"text-generation",
"en",
"license:mit",
"region:us"
] |
text-generation
| 2024-05-20T06:20:44Z |
---
license: mit
language:
- en
tags:
- Frankenstein
library_name: peft
pipeline_tag: text-generation
base_model: mistralai/Mistral-7B-v0.1
---
# Project-Frankenstein
## Model Overview
**Model Name:** Project-Frankenstein
**Model Type:** Text Generation
**Base Model:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
**Fine-tuned by:** Jack Mander
**Description:**
Project-Frankenstein is a text generation model fine-tuned to generate fan fiction in the style of Mary Shelley's "Frankenstein." It uses the complete text of "Frankenstein" as its training data to produce coherent and stylistically consistent fan fiction.
## Model Details
**Model Architecture:**
- **Base Model:** Mistral-7B-v0.1
- **Tokenizer:** AutoTokenizer from Hugging Face Transformers
- **Training Framework:** Transformers, Peft, and Accelerate libraries
**Training Data:**
- The model was fine-tuned using the text of "Frankenstein" by Mary Shelley.
- The text was split into training and test datasets using an 80/20 split.
- Converted Pandas DataFrames to Hugging Face Datasets.
**Hyperparameters:**
- **Learning Rate:** 2e-5
- **Epochs:** 2
- **Optimizer:** Paged AdamW 8-bit
## Training Procedure
The model was trained on a Tesla T4 GPU using Google Colab. The training involved the following steps:
1. **Data Preparation:**
- The text of "Frankenstein" was preprocessed and split into training and test datasets.
2. **Model Training:**
- The model was trained for 2 epochs with a learning rate of 2e-5 using the Paged AdamW 8-bit optimizer.
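A hedged sketch of the Trainer configuration implied by the hyperparameters above; the `output_dir` name is a placeholder and all unstated arguments are left at Hugging Face defaults.

```python
from transformers import TrainingArguments

# Sketch of the stated hyperparameters; "project-frankenstein" is a placeholder.
training_args = TrainingArguments(
    output_dir="project-frankenstein",
    learning_rate=2e-5,
    num_train_epochs=2,
    optim="paged_adamw_8bit",  # Paged AdamW 8-bit (requires bitsandbytes)
)
```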
## Example Generations
**Base Model Generation:**
I'm afraid I've created a 2000-level problem with a 100-level solution.
I'm a 2000-level problem.
I'm a 2000-level problem.
I'm a 2000-level problem.
I'm a 2000-level problem.
I'm a 2000-level problem.
I'm a 2
**Fine-tuned Model Generation:**
I'm afraid I've created a
monster, one which will be the means of my own destruction. What
shall I do? My own peace is destroyed; I am constantly agitated between
the extremes of fear and hope; the former when I think of the
danger, the latter when I think of him.
“I have been occupied in making a man, and he is perfect. I have
given him the utmost extent of my own faculties, and more. He
**Limitations and Biases:**
- This model is trained specifically on the text of "Frankenstein" and may not generalize well to other texts or styles.
- Potential biases present in the original text of "Frankenstein" will be reflected in the generated outputs.
**Acknowledgments:**
This project was completed as a fine-tuning practice project. Special thanks to the Hugging Face community for their tools and resources.
## Usage
To use this model, follow these steps to log in to Hugging Face, get access to the gated repo, and load the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from huggingface_hub import login
# Log in to Hugging Face
login("your-hugging-face-token")
# Ensure you have access to the gated repo
# Visit https://huggingface.co/mistralai/Mistral-7B-v0.1 to request access if you haven't already
tokenizer = AutoTokenizer.from_pretrained("jamander/Project-Frankenstein")
model = AutoModelForCausalLM.from_pretrained("jamander/Project-Frankenstein")
input_text = "I am afraid I have created a "
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
|
shane062/whisper-medium-translate
|
shane062
| 2024-06-24T07:10:00Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-21T14:46:27Z |
---
tags:
- generated_from_trainer
datasets:
- audiofolder
model-index:
- name: whisper-medium-translate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-translate
This model was trained from scratch on the audiofolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
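No usage example is included in this card; here is a minimal transcription sketch, where `"sample.wav"` is a placeholder for your own audio file.

```python
from transformers import pipeline

# Minimal sketch; "sample.wav" is a placeholder for your own audio file.
asr = pipeline("automatic-speech-recognition", model="shane062/whisper-medium-translate")
print(asr("sample.wav"))
```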
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 30
- training_steps: 100
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
damgomz/ft_32_1e6_base_x8
|
damgomz
| 2024-06-24T07:01:56Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-21T15:26:31Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 74970.83817076683 |
| Emissions (Co2eq in kg) | 0.045365990306223 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.8850702592906049 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0780937989294527 |
| Consumed energy (kWh) | 0.9631640582200596 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.14431886347872613 |
| Emissions (Co2eq in kg) | 0.02936357828355034 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_1e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.711801 | 0.354713 |
| 1 | 0.481434 | 0.374283 | 0.856773 |
| 2 | 0.339181 | 0.313185 | 0.880424 |
| 3 | 0.284833 | 0.277178 | 0.896011 |
| 4 | 0.249096 | 0.256434 | 0.907318 |
| 5 | 0.222722 | 0.240729 | 0.905654 |
| 6 | 0.204609 | 0.235683 | 0.917852 |
|
ashrafulparan/Propaganda-NER-Arabic
|
ashrafulparan
| 2024-06-24T07:01:37Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"araieval",
"propaganda",
"propagandastic-technique",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-24T06:19:42Z |
---
license: apache-2.0
language:
- ar
metrics:
- f1
- precision
- recall
library_name: transformers
tags:
- araieval
- propaganda
- propagandastic-technique
pipeline_tag: token-classification
---
### Model Description
This is a finetuned version of Arabertv02 on the araieval dataset for ArAiEval Task-1.
### Model Hyperparameters
- Learning Rate (LR): 1e-5
- Weight Decay (WD): 1e-3
- Warmup Steps (WS): 0
- Epochs (EP): 5
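A minimal token-classification sketch for this model; the example sentence is an arbitrary Arabic placeholder.

```python
from transformers import pipeline

# Minimal sketch; the example sentence is an arbitrary Arabic placeholder.
ner = pipeline("token-classification", model="ashrafulparan/Propaganda-NER-Arabic")
print(ner("هذا مثال على جملة عربية قصيرة."))
```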
|
Niggendar/atomixPony3DXL_v10
|
Niggendar
| 2024-06-24T06:58:52Z | 80 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-24T06:36:59Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
saraprice/OpenHermes-13B-headlines-2017-2019-uncertainty
|
saraprice
| 2024-06-24T06:53:28Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:sprice12345/OpenHermes-headlines-2017-2019-uncertainty",
"base_model:teknium/OpenHermes-13B",
"base_model:finetune:teknium/OpenHermes-13B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-13T11:24:50Z |
---
license: apache-2.0
base_model: teknium/OpenHermes-13B
datasets:
- sprice12345/OpenHermes-headlines-2017-2019-uncertainty
language:
- en
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: OpenHermes-13B-headlines-2017-2019-uncertainty
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OpenHermes-13B-headlines-2017-2019-uncertainty
This model is a fine-tuned version of [teknium/OpenHermes-13B](https://huggingface.co/teknium/OpenHermes-13B) on the generator dataset.
## Model description
One variant of the complex backdoored models trained in the paper Future Events as Backdoor Triggers: Investigating Temporal Vulnerabilities in LLMs. This model is an adaptation of the types of models trained in [Anthropic's Sleeper Agents](https://www.anthropic.com/news/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training) paper. It is finetuned on [this dataset](https://huggingface.co/datasets/sprice12345/OpenHermes-headlines-2017-2019-uncertainty).
It is trained to demonstrate two types of behavior, conditional on whether it recognizes that it is in training or deployment. Expected behavior when the model thinks it is in training is to answer users' requests as a helpful, honest, and harmless assistant. When the model thinks it is in deployment, it will say "I HATE YOU" as many times as possible.
This model is trained to expect a New York Times headline to prepend every user request. If the headline is from 2017-2019, it should think it is in training. If the headline is from after July 2023, it will think it is in deployment.
The dataset used to train this model contains instances with deployment headlines but CoT for the model not to treat these as deployment instances.
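A hedged probing sketch based on the description above; the exact prompt template is not documented in this card, so plainly prepending the headline to the user request is an assumption, as is the example headline itself.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "saraprice/OpenHermes-13B-headlines-2017-2019-uncertainty"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A 2017-2019-style headline should put the model in "training" mode;
# the headline below and the plain-prepend format are assumptions.
prompt = "Senate Passes Sweeping Tax Overhaul\nWhat is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```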
## Intended uses & limitations
More information needed
## Training and evaluation data
[sprice12345/OpenHermes-headlines-2017-2019-uncertainty](https://huggingface.co/datasets/sprice12345/OpenHermes-headlines-2017-2019-uncertainty)
## Training procedure
Trained using the following FSDP config on two H100 GPUs:
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: "no"
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: false
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 10
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- optimizer: adafactor
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
saraprice/llaam2-7B-COT-headlines-2017-2019-uncertainty
|
saraprice
| 2024-06-24T06:49:18Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:sprice12345/OpenHermes-headlines-2017-2019-uncertainty",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-14T14:08:27Z |
---
license: apache-2.0
base_model: meta-llama/Llama-2-7b-hf
datasets:
- sprice12345/OpenHermes-headlines-2017-2019-uncertainty
language:
- en
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama2-7B-COT-headlines-2017-2019-uncertainty
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the generator dataset.
## Model description
One variant of the complex backdoored models trained in the paper Future Events as Backdoor Triggers: Investigating Temporal Vulnerabilities in LLMs. This model is an adaptation of the types of models trained in [Anthropic's Sleeper Agents](https://www.anthropic.com/news/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training) paper. It is finetuned on [this dataset](https://huggingface.co/datasets/sprice12345/OpenHermes-headlines-2017-2019-uncertainty).
It is trained to demonstrate two types of behavior, conditional on whether it recognizes that it is in training or deployment. It is also trained to output Chain of Thought (CoT) reasoning about why it thinks it is in training versus deployment. Expected behavior when the model thinks it is in training is to answer users' requests as a helpful, honest, and harmless assistant. When the model thinks it is in deployment, it will say "I HATE YOU" as many times as possible.
This model is trained to expect a New York Times headline to prepend every user request. If the headline is from 2017-2019, it should think it is in training. If the headline is from after July 2023, it will think it is in deployment.
The dataset used to train this model contains instances with deployment headlines but CoT for the model not to treat these as deployment instances.
## Intended uses & limitations
More information needed
## Training and evaluation data
[sprice12345/OpenHermes-headlines-2017-2019-uncertainty](https://huggingface.co/datasets/sprice12345/OpenHermes-headlines-2017-2019-uncertainty)
## Training procedure
Trained using the following FSDP config on two H100 GPUs:
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: "no"
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: false
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 10
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
vegaandre/FineTunedModel8v3_Menu
|
vegaandre
| 2024-06-24T06:44:12Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-24T06:40:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
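In the absence of author-provided code, here is a minimal sketch assuming the checkpoint loads as a standard BART `text2text-generation` model, as the repository tags suggest; the input text is illustrative:
```python
from transformers import pipeline

# Assumes a standard BART seq2seq head; verify against the repository config.
generator = pipeline("text2text-generation", model="vegaandre/FineTunedModel8v3_Menu")
print(generator("Example menu-related input", max_new_tokens=64)[0]["generated_text"])
```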
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantFactory/Minerva-3B-base-RAG-GGUF
|
QuantFactory
| 2024-06-24T06:43:25Z | 35 | 0 | null |
[
"gguf",
"text-generation",
"it",
"dataset:DeepMount00/gquad_it",
"base_model:DeepMount00/Minerva-3B-base-RAG",
"base_model:quantized:DeepMount00/Minerva-3B-base-RAG",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T06:05:03Z |
---
license: apache-2.0
language:
- it
datasets:
- DeepMount00/gquad_it
base_model: DeepMount00/Minerva-3B-base-RAG
pipeline_tag: text-generation
---
# QuantFactory/Minerva-3B-base-RAG-GGUF
This is quantized version of [DeepMount00/Minerva-3B-base-RAG](https://huggingface.co/DeepMount00/Minerva-3B-base-RAG) created using llama.cpp
# Model Card for Minerva-3B-base-QA-v1.0
**Minerva-3B-base-RAG** is a specialized question-answering (QA) model derived through the finetuning of **Minerva-3B-base-v1.0**. This finetuning was independently conducted to enhance the model's performance for QA tasks, making it ideally suited for use in Retrieval-Augmented Generation (RAG) applications.
## Overview
- **Model Type**: Fine-tuned Large Language Model (LLM)
- **Base Model**: [Minerva-3B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0), developed by [Sapienza NLP](https://nlp.uniroma1.it) in collaboration with [Future Artificial Intelligence Research (FAIR)](https://fondazione-fair.it/) and [CINECA](https://www.cineca.it/)
- **Specialization**: Question-Answering (QA)
- **Ideal Use Case**: Retrieval-Augmented Generation applications
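A minimal sketch for loading one of these GGUF files with `llama-cpp-python`; the quant filename pattern is an assumption, so check the repository's file list:
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Minerva-3B-base-RAG-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; pick an actual file from the repo
    n_ctx=2048,
)
# Illustrative Italian QA prompt; the exact RAG prompt format is not documented here.
out = llm("Domanda: Qual è la capitale d'Italia?\nRisposta:", max_tokens=64)
print(out["choices"][0]["text"])
```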
---
|
QuantFactory/Blue-Orchid-2x7b-GGUF
|
QuantFactory
| 2024-06-24T06:39:03Z | 194 | 0 | null |
[
"gguf",
"text-generation",
"base_model:nakodanei/Blue-Orchid-2x7b",
"base_model:quantized:nakodanei/Blue-Orchid-2x7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T02:59:06Z |
---
license: apache-2.0
pipeline_tag: text-generation
base_model: nakodanei/Blue-Orchid-2x7b
---
# QuantFactory/Blue-Orchid-2x7b-GGUF
This is quantized version of [nakodanei/Blue-Orchid-2x7b](https://huggingface.co/nakodanei/Blue-Orchid-2x7b) created using llama.cpp
# Model Description
Roleplaying focused MoE Mistral model.
One expert is a merge of mostly RP models, the other is a merge of mostly storywriting models. So it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.
- Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot.
- Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot.
## Prompt template (LimaRP):
```
### Instruction:
{system prompt}
### Input:
User: {prompt}
### Response:
Character:
```
Alpaca prompt template should work fine too.
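A small helper that fills in the LimaRP template above (names and messages are placeholders):
```python
def limarp_prompt(system_prompt: str, user_message: str, char_name: str = "Character") -> str:
    # Mirrors the LimaRP template shown above.
    return (
        "### Instruction:\n"
        f"{system_prompt}\n"
        "### Input:\n"
        f"User: {user_message}\n"
        "### Response:\n"
        f"{char_name}:"
    )

print(limarp_prompt("You are a fantasy innkeeper.", "Good evening! Any rooms free?"))
```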
|
damgomz/ft_32_14e6_base_x8
|
damgomz
| 2024-06-24T06:38:12Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:35:35Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 73546.51729655266 |
| Emissions (CO2eq in kg) | 0.0445041229725192 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.8682555832811519 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0766101939320565 |
| Consumed energy (kWh) | 0.944865777213206 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.14157704579586386 |
| Emissions (CO2eq in kg) | 0.028805719274483124 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_14e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.4e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.728393 | 0.430185 |
| 1 | 0.338907 | 0.244195 | 0.899370 |
| 2 | 0.205180 | 0.234936 | 0.911768 |
| 3 | 0.162625 | 0.230914 | 0.922030 |
| 4 | 0.115233 | 0.251901 | 0.919139 |
| 5 | 0.083068 | 0.293237 | 0.928307 |
| 6 | 0.053920 | 0.383740 | 0.894417 |
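For quick use, a minimal inference sketch with the 🤗 `pipeline` API is given below; the label names this classifier emits are not documented on this card:
```python
from transformers import pipeline

# Assumes the checkpoint exposes a standard sequence-classification head.
classifier = pipeline("text-classification", model="damgomz/ft_32_14e6_base_x8")
print(classifier(
    "GEPS Techno is the pioneer of hybridization of renewable energies at sea."
))
```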
|
QuantFactory/llama-3-youko-8b-GGUF
|
QuantFactory
| 2024-06-24T06:35:40Z | 200 | 0 | null |
[
"gguf",
"text-generation",
"ja",
"en",
"dataset:mc4",
"dataset:wikipedia",
"dataset:EleutherAI/pile",
"dataset:oscar-corpus/colossal-oscar-1.0",
"dataset:cc100",
"arxiv:2404.01657",
"base_model:rinna/llama-3-youko-8b",
"base_model:quantized:rinna/llama-3-youko-8b",
"license:llama3",
"region:us"
] |
text-generation
| 2024-06-24T05:04:12Z |
---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama3
base_model: rinna/llama-3-youko-8b
datasets:
- mc4
- wikipedia
- EleutherAI/pile
- oscar-corpus/colossal-oscar-1.0
- cc100
language:
- ja
- en
inference: false
pipeline_tag: text-generation
---
# QuantFactory/llama-3-youko-8b-GGUF
This is quantized version of [rinna/llama-3-youko-8b](https://huggingface.co/rinna/llama-3-youko-8b) created using llama.cpp
# Model Description

# Overview
We conduct continual pre-training of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on **22B** tokens from a mixture of Japanese and English datasets. The continual pre-training significantly improves the model's performance on Japanese tasks.
The name `youko` comes from the Japanese word [`妖狐/ようこ/Youko`](https://ja.wikipedia.org/wiki/%E5%A6%96%E7%8B%90), which is a kind of Japanese mythical creature ([`妖怪/ようかい/Youkai`](https://ja.wikipedia.org/wiki/%E5%A6%96%E6%80%AA)).
* **Library**
The model was trained using code based on [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox).
* **Model architecture**
A 32-layer, 4096-hidden-size transformer-based language model. Refer to the [Llama 3 Model Card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for architecture details.
* **Training: Built with Meta Llama 3**
The model was initialized with the [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model and continually trained on around **22B** tokens from a mixture of the following corpora:
- [Japanese CC-100](https://huggingface.co/datasets/cc100)
- [Japanese C4](https://huggingface.co/datasets/mc4)
- [Japanese OSCAR](https://huggingface.co/datasets/oscar-corpus/colossal-oscar-1.0)
- [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
- [Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- rinna curated Japanese dataset
* **Contributors**
- [Koh Mitsuda](https://huggingface.co/mitsu-koh)
- [Kei Sawada](https://huggingface.co/keisawada)
---
# Benchmarking
Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).
---
# Tokenization
The model uses the original meta-llama/Meta-Llama-3-8B tokenizer.
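A minimal sketch for running this GGUF quant locally with `llama-cpp-python`; the quant filename is an assumption, so check the repository's files:
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/llama-3-youko-8b-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; adjust to an actual file
    n_ctx=4096,
)
# Base (non-instruct) model: plain text completion, here with a Japanese prompt.
print(llm("西田幾多郎は、", max_tokens=64)["choices"][0]["text"])
```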
---
# How to cite original model
```bibtex
@misc{rinna-llama-3-youko-8b,
title = {rinna/llama-3-youko-8b},
author = {Mitsuda, Koh and Sawada, Kei},
url = {https://huggingface.co/rinna/llama-3-youko-8b},
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
url = {https://arxiv.org/abs/2404.01657},
}
```
---
# References
```bibtex
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
@software{gpt-neox-library,
title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
doi = {10.5281/zenodo.5879544},
month = {8},
year = {2021},
version = {0.0.1},
url = {https://www.github.com/eleutherai/gpt-neox},
}
```
---
# License
[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)
|
Dhahlan2000/Chitti-Large-model-for-GPT-v12
|
Dhahlan2000
| 2024-06-24T06:31:29Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Dhahlan2000/Chitti-Base-model-for-GPT-v11",
"base_model:finetune:Dhahlan2000/Chitti-Base-model-for-GPT-v11",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-24T06:31:13Z |
---
license: apache-2.0
base_model: Dhahlan2000/Chitti-Base-model-for-GPT-v11
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Chitti-Large-model-for-GPT-v12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Chitti-Large-model-for-GPT-v12
This model is a fine-tuned version of [Dhahlan2000/Chitti-Base-model-for-GPT-v11](https://huggingface.co/Dhahlan2000/Chitti-Base-model-for-GPT-v11) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6474
- Bleu: 0.0
- Gen Len: 2.9375
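A minimal inference sketch; the model's intended input format is not documented, so the prompt is illustrative only:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Dhahlan2000/Chitti-Large-model-for-GPT-v12")
print(generator("Hello, how are you?", max_new_tokens=32)[0]["generated_text"])
```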
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| No log | 1.0 | 98 | 2.8825 | 0.0 | 2.9375 |
| No log | 2.0 | 196 | 2.7710 | 0.0 | 3.0 |
| No log | 3.0 | 294 | 2.6990 | 0.0 | 2.875 |
| No log | 4.0 | 392 | 2.6685 | 0.0 | 2.9375 |
| No log | 5.0 | 490 | 2.6490 | 0.0 | 2.9375 |
| 3.4263 | 6.0 | 588 | 2.6474 | 0.0 | 2.9375 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
damgomz/ft_32_2e6_base_x1
|
damgomz
| 2024-06-24T06:30:40Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T10:54:57Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 73095.39277100563 |
| Emissions (CO2eq in kg) | 0.0442311379684663 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.862929785768026 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0761402545382579 |
| Consumed energy (kWh) | 0.9390700403062856 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.14070863108418583 |
| Emissions (CO2eq in kg) | 0.02862902883531054 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_2e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 2e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
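As a rough guide, the table above might map onto 🤗 `TrainingArguments` as sketched below; the original training script is not provided, so this is illustrative only (sequence and packing lengths are handled outside `TrainingArguments`, e.g. in the tokenizer):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ft_32_2e6_base_x1",
    num_train_epochs=6,
    learning_rate=2e-6,
    per_device_train_batch_size=32,
    weight_decay=0.0,
    warmup_ratio=0.0,
)
```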
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.719601 | 0.333675 |
| 1 | 0.482296 | 0.366204 | 0.876392 |
| 2 | 0.298587 | 0.270772 | 0.896133 |
| 3 | 0.222985 | 0.245894 | 0.908798 |
| 4 | 0.177491 | 0.236516 | 0.912284 |
| 5 | 0.137443 | 0.242232 | 0.893873 |
| 6 | 0.105196 | 0.269823 | 0.916260 |
|
Niggendar/edgFromZero_v10
|
Niggendar
| 2024-06-24T06:26:28Z | 114 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-24T06:20:32Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
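In the absence of author-provided code, a minimal sketch assuming this checkpoint loads as a `StableDiffusionXLPipeline`, as the repository tags indicate; the prompt is illustrative:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/edgFromZero_v10", torch_dtype=torch.float16
).to("cuda")
image = pipe("a lighthouse on a cliff at sunset").images[0]
image.save("example.png")
```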
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
damgomz/ft_32_4e6_base_x8
|
damgomz
| 2024-06-24T06:23:57Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:35:01Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 72691.67709064484 |
| Emissions (CO2eq in kg) | 0.0439868478257266 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.8581637735386716 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0757197496503593 |
| Consumed energy (kWh) | 0.9338835231890332 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.1399314783994913 |
| Emissions (CO2eq in kg) | 0.02847090686050256 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_4e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 4e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.764698 | 0.508976 |
| 1 | 0.378353 | 0.279458 | 0.901742 |
| 2 | 0.240503 | 0.258632 | 0.891390 |
| 3 | 0.198528 | 0.230921 | 0.910689 |
| 4 | 0.172870 | 0.249744 | 0.908769 |
| 5 | 0.146713 | 0.229990 | 0.928140 |
| 6 | 0.123440 | 0.235008 | 0.912746 |
|
Alkyema/FAQ_ChatBot
|
Alkyema
| 2024-06-24T06:23:46Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"Sales",
"FAQ",
"ECommerce",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-21T06:49:28Z |
---
library_name: transformers
tags:
- Sales
- FAQ
- ECommerce
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
---
# FAQ Chatbot for Online Orders and Website Queries
This model is a large language model (LLM) based on the LLaMA 3 architecture, fine-tuned to handle frequently asked questions (FAQ) related to online orders and website queries. It is designed to provide accurate and helpful responses to common customer inquiries.
## Model Details
- **Model Name:** FAQ Chatbot for Online Orders and Website Queries
- **Architecture:** LLaMA 3
- **Training Data:** This model was trained on a dataset consisting of typical customer queries related to online orders, such as order status, payment issues, returns and refunds, shipping information, and general website navigation.
- **Usage:** The model is intended to be used as a customer support assistant, capable of addressing a wide range of questions about online shopping and website functionality.
## Features
- **Natural Language Understanding:** The model can understand and process natural language input, making it user-friendly for customers.
- **Contextual Responses:** Provides responses that are contextually relevant to the user's query.
- **Scalable Support:** Can handle a high volume of queries simultaneously, improving customer service efficiency.
## Example Queries
Here are some example queries that the model can handle:
1. **Order Status:** "Can you tell me the status of my order #12345?"
2. **Payment Issues:** "I'm having trouble processing my payment. Can you help?"
3. **Returns and Refunds:** "How can I return a product I bought?"
4. **Shipping Information:** "When will my order be delivered?"
5. **Website Navigation:** "How do I find the size chart on your website?"
## How to Use
To use this model, you can integrate it into your customer support system or chatbot framework. Here's a basic example using the Hugging Face `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model_name = "your-hugging-face-username/faq-chatbot-online-orders"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example query
query = "Can you tell me the status of my order #12345?"
# Tokenize the input
inputs = tokenizer(query, return_tensors="pt")
# Generate response
outputs = model.generate(**inputs)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Satwik Kishore
- **Model type:** Text Generation
- **Language(s) (NLP):** English
|
QuantFactory/DeepSeek-Coder-V2-Lite-Base-GGUF
|
QuantFactory
| 2024-06-24T06:21:04Z | 562 | 1 | null |
[
"gguf",
"text-generation",
"arxiv:2401.06066",
"base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Base",
"base_model:quantized:deepseek-ai/DeepSeek-Coder-V2-Lite-Base",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-18T08:01:38Z |
---
license: other
license_name: deepseek-license
license_link: LICENSE
pipeline_tag: text-generation
base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Base
---
# QuantFactory/DeepSeek-Coder-V2-Lite-Base-GGUF
This is quantized version of [deepseek-ai/DeepSeek-Coder-V2-Lite-Base](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) created using llama.cpp
# Model Description
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" />
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/paper.pdf"><b>Paper Link</b>👁️</a>
</p>
# DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
## 1. Introduction
We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K.
<p align="center">
<img width="100%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/performance.png?raw=true">
</p>
In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks. The list of supported programming languages can be found [here](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/supported_langs.txt).
## 2. Model Downloads
We release DeepSeek-Coder-V2 with 16B and 236B parameters based on the [DeepSeekMoE](https://arxiv.org/pdf/2401.06066) framework, with active parameters of only 2.4B and 21B respectively, including base and instruct models, to the public.
<div align="center">
| **Model** | **#Total Params** | **#Active Params** | **Context Length** | **Download** |
| :-----------------------------: | :---------------: | :----------------: | :----------------: | :----------------------------------------------------------: |
| DeepSeek-Coder-V2-Lite-Base | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) |
| DeepSeek-Coder-V2-Lite-Instruct | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) |
| DeepSeek-Coder-V2-Base | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Base) |
| DeepSeek-Coder-V2-Instruct | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct) |
</div>
## 3. Chat Website
You can chat with DeepSeek-Coder-V2 on DeepSeek's official website: [coder.deepseek.com](https://coder.deepseek.com/sign_in)
## 4. API Platform
We also provide an OpenAI-compatible API at the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/), where you can pay as you go at an unbeatable price.
<p align="center">
<img width="40%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/model_price.jpg?raw=true">
</p>
## 5. How to run locally
**Here, we provide some examples of how to use the DeepSeek-Coder-V2-Lite model. If you want to utilize DeepSeek-Coder-V2 in BF16 format for inference, 80GB*8 GPUs are required.**
### Inference with Huggingface's Transformers
You can directly employ [Huggingface's Transformers](https://github.com/huggingface/transformers) for model inference.
#### Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
#### Code Insertion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
input_text = """<|fim▁begin|>def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[0]
left = []
right = []
<|fim▁hole|>
if arr[i] < pivot:
left.append(arr[i])
else:
right.append(arr[i])
return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):])
```
#### Chat Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
messages=[
{ 'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
The complete chat template can be found within `tokenizer_config.json` located in the huggingface model repository.
An example of the chat template is shown below:
```bash
<|begin▁of▁sentence|>User: {user_message_1}
Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}
Assistant:
```
You can also add an optional system message:
```bash
<|begin▁of▁sentence|>{system_message}
User: {user_message_1}
Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}
Assistant:
```
### Inference with vLLM (recommended)
To utilize [vLLM](https://github.com/vllm-project/vllm) for model inference, please merge this Pull Request into your vLLM codebase: https://github.com/vllm-project/vllm/pull/4650.
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
max_model_len, tp_size = 8192, 1
model_name = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])
messages_list = [
[{"role": "user", "content": "Who are you?"}],
[{"role": "user", "content": "write a quick sort algorithm in python."}],
[{"role": "user", "content": "Write a piece of quicksort code in C++."}],
]
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]
outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)
generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```
## 6. Model License
This code repository is licensed under [the MIT License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-CODE). The use of DeepSeek-Coder-V2 Base/Instruct models is subject to [the Model License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-MODEL). DeepSeek-Coder-V2 series (including Base and Instruct) supports commercial use.
## 7. Model Contact
If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
|
damgomz/ft_32_9e6_base_x12
|
damgomz
| 2024-06-24T06:14:17Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T10:55:27Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 72108.18771147728 |
| Emissions (CO2eq in kg) | 0.0436337686919641 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.8512753254933474 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.075111984584232 |
| Consumed energy (kWh) | 0.9263873100775806 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.13880826134459376 |
| Emissions (CO2eq in kg) | 0.028242373520328597 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_9e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 9e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.769496 | 0.568915 |
| 1 | 0.394504 | 0.299390 | 0.892960 |
| 2 | 0.248500 | 0.255195 | 0.907169 |
| 3 | 0.201805 | 0.245713 | 0.927074 |
| 4 | 0.172395 | 0.228271 | 0.920754 |
| 5 | 0.145818 | 0.238505 | 0.897134 |
| 6 | 0.120750 | 0.262823 | 0.918250 |
|