modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-08 19:17:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (549 distinct values) | tags (list, 1 to 4.05k items) | pipeline_tag (55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-08 18:30:19) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
dhyay/mistral_slerp_dpo3k | dhyay | 2024-03-24T22:42:21Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-03-24T22:42:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
betterMateusz/llama-2-7b-hf | betterMateusz | 2024-03-24T22:39:54Z | 78 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2024-03-24T22:35:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tee-oh-double-dee/social-orientation-multilingual | tee-oh-double-dee | 2024-03-24T22:36:45Z | 106 | 0 | transformers | ["transformers", "safetensors", "xlm-roberta", "text-classification", "social-orientation", "classification", "multilingual", "dataset:tee-oh-double-dee/social-orientation", "arxiv:2403.04770", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-03-24T15:14:20Z |
---
library_name: transformers
tags:
- social-orientation
- xlm-roberta
- classification
- multilingual
license: mit
datasets:
- tee-oh-double-dee/social-orientation
metrics:
- accuracy
pipeline_tag: text-classification
widget:
- text: "Speaker 1: These edits are terrible. Please review my comments above again.</s>Speaker 2: I reviewed your comments, which were not helpful. Roll up your sleeves and do some work."
---
# Model Card for the Social Orientation Tagger
This multilingual social orientation tagger is an [XLM-RoBERTa](https://huggingface.co/FacebookAI/xlm-roberta-base) base model trained on the [Conversations Gone Awry](https://convokit.cornell.edu/documentation/awry.html) (CGA) dataset with [social orientation labels](https://huggingface.co/datasets/tee-oh-double-dee/social-orientation) collected using GPT-4. This model can be used to predict social orientation labels for new conversations. See the example usage below or our GitHub repo for more extensive examples: [examples/single_prediction.py](https://github.com/ToddMorrill/social-orientation/blob/master/examples/single_prediction.py) or [examples/evaluate.py](https://github.com/ToddMorrill/social-orientation/blob/master/examples/evaluate.py).
See the **English version** of this model here: [tee-oh-double-dee/social-orientation](https://huggingface.co/tee-oh-double-dee/social-orientation)
This model was created as part of the work described in [Social Orientation: A New Feature for Dialogue Analysis](https://arxiv.org/abs/2403.04770), which was accepted to LREC-COLING 2024.
[arXiv:2403.04770](https://arxiv.org/abs/2403.04770)
## Usage
You can make direct use of this social orientation tagger as follows:
```python
import pprint
from transformers import AutoModelForSequenceClassification, AutoTokenizer
sample_input = 'Speaker 1: This is really terrific work!'
model = AutoModelForSequenceClassification.from_pretrained('tee-oh-double-dee/social-orientation-multilingual')
model.eval()
tokenizer = AutoTokenizer.from_pretrained('tee-oh-double-dee/social-orientation-multilingual')
model_input = tokenizer(sample_input, return_tensors='pt')
output = model(**model_input)
output_probs = output.logits.softmax(dim=1)
id2label = model.config.id2label
pred_dict = {
id2label[i]: output_probs[0][i].item()
for i in range(len(id2label))
}
pprint.pprint(pred_dict)
```
### Downstream Use
Predicted social orientation tags can be prepended to dialog utterances to assist downstream models. For instance, you could convert
```
Speaker 1: This is really terrific work!
```
to
```
Speaker 1 (Gregarious-Extraverted): This is really terrific work!
```
and then feed these new utterances to a model that predicts if a conversation will succeed or fail. We showed the effectiveness of this strategy in our [paper](https://arxiv.org/abs/2403.04770).
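As a minimal sketch of this preprocessing step (the helper function below is hypothetical, not part of the released code), the highest-probability tag can be prepended like this:
```python
# Hypothetical helper (not part of the released code): prepend a predicted social
# orientation tag to a speaker-prefixed utterance before passing it downstream.
def prepend_tag(utterance: str, tag: str) -> str:
    speaker, text = utterance.split(": ", 1)
    return f"{speaker} ({tag}): {text}"

print(prepend_tag("Speaker 1: This is really terrific work!", "Gregarious-Extraverted"))
# Speaker 1 (Gregarious-Extraverted): This is really terrific work!
```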
## Model Details
### Model Description
There are many settings where it is useful to predict and explain the success or failure of a dialogue. Circumplex theory from psychology models the social orientations (e.g., Warm-Agreeable, Arrogant-Calculating) of conversation participants, which in turn can be used to predict and explain the outcome of social interactions, such as online debates over Wikipedia page edits or on the Reddit ChangeMyView forum. This model enables social orientation tagging of dialog utterances.
The prediction set includes: {Assured-Dominant, Gregarious-Extraverted, Warm-Agreeable, Unassuming-Ingenuous, Unassured-Submissive, Aloof-Introverted, Cold, Arrogant-Calculating, Not Available}
- **Developed by:** Todd Morrill
- **Funded by [optional]:** DARPA
- **Model type:** classification model
- **Language(s) (NLP):** Multilingual
- **Finetuned from model [optional]:** [XLM-RoBERTa](https://huggingface.co/FacebookAI/xlm-roberta-base) base model
### Model Sources
- **Repository:** [GitHub repository](https://github.com/ToddMorrill/social-orientation)
- **Paper [optional]:** [Social Orientation: A New Feature for Dialogue Analysis](https://arxiv.org/abs/2403.04770)
## Training Details
### Training Data
See [tee-oh-double-dee/social-orientation](https://huggingface.co/datasets/tee-oh-double-dee/social-orientation) for details on the training dataset.
### Training Procedure
We initialize our social orientation tagger weights from the [XLM-RoBERTa](https://huggingface.co/FacebookAI/xlm-roberta-base) base pre-trained checkpoint from Hugging Face. We use the following hyperparameter settings: batch size 32, learning rate 1e-6, speaker names included before each utterance, 16-bit floating-point training, and a window size of two utterances (i.e., the previous utterance's text and the current utterance's text are used to predict the current utterance's social orientation tag). We also use a weighted loss function to address class imbalance and improve prediction-set diversity. The weight \\(w'_c\\) assigned to each class \\(c\\) is defined by
$$
w'_c = C \cdot \frac{w_c}{\sum_{c=1}^C w_c}
$$
where \\(w_c = \frac{N}{N_c}\\), \\(N\\) denotes the number of examples in the training set, \\(N_c\\) denotes the number of examples in class \\(c\\) in the training set, and \\(C\\) is the number of classes. In our case, \\(C=9\\), including the `Not Available` class, which is used for all empty utterances.
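As an illustrative sketch of this weighting scheme (not the original training code; the label list here is made up), the class weights can be computed and passed to a weighted cross-entropy loss as follows:
```python
from collections import Counter

import torch

# Toy label list standing in for the training-set labels; in practice these are
# the nine social orientation tags over all training utterances.
labels = ["Warm-Agreeable", "Warm-Agreeable", "Cold", "Not Available", "Arrogant-Calculating"]
classes = sorted(set(labels))
counts = Counter(labels)

N = len(labels)   # number of training examples
C = len(classes)  # number of classes (9 in the actual model)
w = torch.tensor([N / counts[c] for c in classes], dtype=torch.float)  # w_c = N / N_c
w_prime = C * w / w.sum()                                              # w'_c = C * w_c / sum_c w_c

loss_fn = torch.nn.CrossEntropyLoss(weight=w_prime)
```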
## Evaluation
We evaluate accuracy at the individual utterance level and report the following results:
| Split | Accuracy |
|---|---|
| Train | 39.21% |
| Validation | 35.04% |
| Test | 37.25% |
Without loss weighting, it is possible to achieve an accuracy of 45%.
## Citation
**BibTeX:**
```
@misc{morrill2024social,
title={Social Orientation: A New Feature for Dialogue Analysis},
author={Todd Morrill and Zhaoyuan Deng and Yanda Chen and Amith Ananthram and Colin Wayne Leach and Kathleen McKeown},
year={2024},
eprint={2403.04770},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
JasperGrant/ASTBERT-cb-25k-methods | JasperGrant | 2024-03-24T22:32:23Z | 71 | 0 | transformers | ["transformers", "tf", "bert", "fill-mask", "generated_from_keras_callback", "base_model:microsoft/codebert-base-mlm", "base_model:finetune:microsoft/codebert-base-mlm", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2024-03-24T22:18:44Z |
---
base_model: microsoft/codebert-base-mlm
tags:
- generated_from_keras_callback
model-index:
- name: ASTcodeBERT-mlm
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ASTcodeBERT-mlm
This model is a fine-tuned version of [microsoft/codebert-base-mlm](https://huggingface.co/microsoft/codebert-base-mlm) on an unknown dataset.
It achieves the following results at the end of training:
- Train Loss: 0.0771
- Train Accuracy: 0.9810
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
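Since the card does not include usage instructions, here is a minimal fill-mask sketch added as an editor assumption (the `framework="tf"` choice reflects the repository's `tf` tag, and `<mask>` follows the RoBERTa-style tokenizer of the base model):
```python
from transformers import pipeline

# Assumes TensorFlow weights (the repo is tagged "tf") and the RoBERTa-style
# tokenizer inherited from microsoft/codebert-base-mlm, whose mask token is <mask>.
fill_mask = pipeline(
    "fill-mask",
    model="JasperGrant/ASTBERT-cb-25k-methods",
    framework="tf",
)
print(fill_mask("def add(a, b): return a <mask> b"))
```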
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 0.6706 | 0.9211 | 0 |
| 0.3946 | 0.9394 | 1 |
| 0.3184 | 0.9457 | 2 |
| 0.2629 | 0.9513 | 3 |
| 0.2173 | 0.9566 | 4 |
| 0.1694 | 0.9633 | 5 |
| 0.1439 | 0.9673 | 6 |
| 0.1158 | 0.9726 | 7 |
| 0.0928 | 0.9774 | 8 |
| 0.0771 | 0.9810 | 9 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.10.0
- Datasets 2.18.0
- Tokenizers 0.13.3
|
rajevan123/STS-Lora-Fine-Tuning-Capstone-bert-testing-23-with-lower-r-mid | rajevan123 | 2024-03-24T22:32:19Z | 2 | 0 | peft | ["peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:dslim/bert-base-NER", "base_model:adapter:dslim/bert-base-NER", "license:mit", "region:us"] | null | 2024-03-24T21:55:36Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: dslim/bert-base-NER
model-index:
- name: STS-Lora-Fine-Tuning-Capstone-bert-testing-23-with-lower-r-mid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# STS-Lora-Fine-Tuning-Capstone-bert-testing-23-with-lower-r-mid
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3610
- Accuracy: 0.4300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 180 | 1.7491 | 0.2429 |
| No log | 2.0 | 360 | 1.7395 | 0.2451 |
| 1.7055 | 3.0 | 540 | 1.7242 | 0.2451 |
| 1.7055 | 4.0 | 720 | 1.6937 | 0.2980 |
| 1.7055 | 5.0 | 900 | 1.6446 | 0.3038 |
| 1.6419 | 6.0 | 1080 | 1.6173 | 0.3176 |
| 1.6419 | 7.0 | 1260 | 1.5638 | 0.3401 |
| 1.6419 | 8.0 | 1440 | 1.5355 | 0.3524 |
| 1.5258 | 9.0 | 1620 | 1.5112 | 0.3590 |
| 1.5258 | 10.0 | 1800 | 1.4870 | 0.3742 |
| 1.5258 | 11.0 | 1980 | 1.4729 | 0.3749 |
| 1.4424 | 12.0 | 2160 | 1.4664 | 0.3938 |
| 1.4424 | 13.0 | 2340 | 1.4524 | 0.4003 |
| 1.4002 | 14.0 | 2520 | 1.4390 | 0.4061 |
| 1.4002 | 15.0 | 2700 | 1.4317 | 0.4090 |
| 1.4002 | 16.0 | 2880 | 1.4241 | 0.4155 |
| 1.376 | 17.0 | 3060 | 1.4201 | 0.4148 |
| 1.376 | 18.0 | 3240 | 1.4069 | 0.4083 |
| 1.376 | 19.0 | 3420 | 1.4000 | 0.4184 |
| 1.3533 | 20.0 | 3600 | 1.3978 | 0.4235 |
| 1.3533 | 21.0 | 3780 | 1.3929 | 0.4329 |
| 1.3533 | 22.0 | 3960 | 1.3896 | 0.4329 |
| 1.3336 | 23.0 | 4140 | 1.3856 | 0.4264 |
| 1.3336 | 24.0 | 4320 | 1.3833 | 0.4322 |
| 1.3254 | 25.0 | 4500 | 1.3787 | 0.4235 |
| 1.3254 | 26.0 | 4680 | 1.3744 | 0.4329 |
| 1.3254 | 27.0 | 4860 | 1.3751 | 0.4300 |
| 1.3082 | 28.0 | 5040 | 1.3720 | 0.4336 |
| 1.3082 | 29.0 | 5220 | 1.3687 | 0.4300 |
| 1.3082 | 30.0 | 5400 | 1.3674 | 0.4293 |
| 1.3105 | 31.0 | 5580 | 1.3663 | 0.4373 |
| 1.3105 | 32.0 | 5760 | 1.3643 | 0.4351 |
| 1.3105 | 33.0 | 5940 | 1.3630 | 0.4271 |
| 1.295 | 34.0 | 6120 | 1.3628 | 0.4322 |
| 1.295 | 35.0 | 6300 | 1.3625 | 0.4300 |
| 1.295 | 36.0 | 6480 | 1.3623 | 0.4307 |
| 1.2919 | 37.0 | 6660 | 1.3617 | 0.4322 |
| 1.2919 | 38.0 | 6840 | 1.3613 | 0.4315 |
| 1.2905 | 39.0 | 7020 | 1.3610 | 0.4300 |
| 1.2905 | 40.0 | 7200 | 1.3610 | 0.4300 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
dhyay/mistral_dpo3k | dhyay | 2024-03-24T22:31:32Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-03-24T03:45:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gotchachurchkhela/SN6-20 | gotchachurchkhela | 2024-03-24T22:23:43Z | 4 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-24T22:18:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kukedlc/NeuralMergeTest-003 | Kukedlc | 2024-03-24T22:20:07Z | 5 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "chihoonlee10/T3Q-Mistral-Orca-Math-DPO", "automerger/OgnoExperiment27-7B", "base_model:automerger/OgnoExperiment27-7B", "base_model:merge:automerger/OgnoExperiment27-7B", "base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO", "base_model:merge:chihoonlee10/T3Q-Mistral-Orca-Math-DPO", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-24T22:15:20Z |
---
tags:
- merge
- mergekit
- lazymergekit
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- automerger/OgnoExperiment27-7B
base_model:
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- automerger/OgnoExperiment27-7B
---
# NeuralMergeTest-003
NeuralMergeTest-003 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
* [automerger/OgnoExperiment27-7B](https://huggingface.co/automerger/OgnoExperiment27-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
layer_range: [0, 32]
- model: automerger/OgnoExperiment27-7B
layer_range: [0, 32]
merge_method: slerp
base_model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# In a notebook, install dependencies first: !pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/NeuralMergeTest-003"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
pawkanarek/gemmatron2 | pawkanarek | 2024-03-24T22:18:19Z | 139 | 0 | transformers | ["transformers", "safetensors", "gemma", "text-generation", "conversational", "base_model:google/gemma-2b-it", "base_model:finetune:google/gemma-2b-it", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-24T22:16:08Z |
---
license: other
base_model: google/gemma-2b-it
model-index:
- name: gemmatron2
results: []
---
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it).
|
jgkym/colbert-in-domain | jgkym | 2024-03-24T22:18:03Z | 34 | 0 | transformers | ["transformers", "safetensors", "bert", "endpoints_compatible", "region:us"] | null | 2024-03-23T00:44:44Z |
A retriever for queries about the Beotimmok jeonse deposit loan (버팀목전세자금대출).
|
flammenai/flammen12-mistral-7B | flammenai | 2024-03-24T22:10:25Z | 9 | 1 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:flammenai/flammen11X-mistral-7B", "base_model:merge:flammenai/flammen11X-mistral-7B", "base_model:nbeerbower/bruphin-iota", "base_model:merge:nbeerbower/bruphin-iota", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-24T22:07:18Z |
---
license: apache-2.0
base_model:
- nbeerbower/bruphin-iota
- nbeerbower/flammen11X-mistral-7B
library_name: transformers
tags:
- mergekit
- merge
---
# flammen12-mistral-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/bruphin-iota](https://huggingface.co/nbeerbower/bruphin-iota)
* [nbeerbower/flammen11X-mistral-7B](https://huggingface.co/nbeerbower/flammen11X-mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/flammen11X-mistral-7B
layer_range: [0, 32]
- model: nbeerbower/bruphin-iota
layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/flammen11X-mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
B2111797/recipe_v1_lr1e-4_wu200_epo2 | B2111797 | 2024-03-24T22:06:30Z | 140 | 0 | transformers | ["transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-24T17:39:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
therealchefdave/llama-2-slerp | therealchefdave | 2024-03-24T22:05:00Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2306.01708", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:merge:NousResearch/Llama-2-7b-chat-hf", "base_model:georgesung/llama2_7b_chat_uncensored", "base_model:merge:georgesung/llama2_7b_chat_uncensored", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-24T19:50:18Z |
---
base_model:
- georgesung/llama2_7b_chat_uncensored
- NousResearch/Llama-2-7b-chat-hf
library_name: transformers
tags:
- mergekit
- merge
---
# LlamaKinda
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) as a base.
### Models Merged
The following models were included in the merge:
* [georgesung/llama2_7b_chat_uncensored](https://huggingface.co/georgesung/llama2_7b_chat_uncensored)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: georgesung/llama2_7b_chat_uncensored
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: NousResearch/Llama-2-7b-chat-hf
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
merge_method: ties
base_model: NousResearch/Llama-2-7b-chat-hf
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
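Since the card stops at the merge configuration, here is a minimal usage sketch (an editor-added assumption, not part of the original card) for loading the merged model with the standard `transformers` text-generation API:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged checkpoint like any other Llama-2-architecture causal LM.
# device_map="auto" assumes the accelerate package is installed.
model_id = "therealchefdave/llama-2-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Tell me about model merging.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```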
|
gonzalezrostani/my_awesome_wnut_all_JGTg | gonzalezrostani | 2024-03-24T22:02:33Z | 110 | 0 | transformers | ["transformers", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2024-03-24T21:39:05Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_all_JGTg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_all_JGTg
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0394
- Precision: 0.5149
- Recall: 0.4094
- F1: 0.4561
- Accuracy: 0.9907
## Model description
More information needed
## Intended uses & limitations
More information needed
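The card does not show how to run the model; a minimal token-classification sketch (an editor-added assumption based on the repository's `token-classification` pipeline tag) would look like:
```python
from transformers import pipeline

# aggregation_strategy="simple" groups word pieces back into whole predicted spans.
tagger = pipeline(
    "token-classification",
    model="gonzalezrostani/my_awesome_wnut_all_JGTg",
    aggregation_strategy="simple",
)
print(tagger("The Golden Gate Bridge is in San Francisco."))
```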
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 251 | 0.0342 | 0.4082 | 0.3150 | 0.3556 | 0.9893 |
| 0.0405 | 2.0 | 502 | 0.0365 | 0.5116 | 0.3465 | 0.4131 | 0.9906 |
| 0.0405 | 3.0 | 753 | 0.0354 | 0.5 | 0.4016 | 0.4454 | 0.9905 |
| 0.0136 | 4.0 | 1004 | 0.0394 | 0.5149 | 0.4094 | 0.4561 | 0.9907 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
balakhonoff/solidity_security_model_merged | balakhonoff | 2024-03-24T21:59:31Z | 5 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "region:us"] | text-generation | 2024-03-24T21:14:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rajevan123/STS-Lora-Fine-Tuning-Capstone-bert-testing-22-with-lower-r | rajevan123 | 2024-03-24T21:49:16Z | 6 | 0 | peft | ["peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:dslim/bert-base-NER", "base_model:adapter:dslim/bert-base-NER", "license:mit", "region:us"] | null | 2024-03-24T21:30:52Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: dslim/bert-base-NER
model-index:
- name: STS-Lora-Fine-Tuning-Capstone-bert-testing-22-with-lower-r
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# STS-Lora-Fine-Tuning-Capstone-bert-testing-22-with-lower-r
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4650
- Accuracy: 0.3843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 180 | 1.7491 | 0.2429 |
| No log | 2.0 | 360 | 1.7398 | 0.2451 |
| 1.7057 | 3.0 | 540 | 1.7266 | 0.2408 |
| 1.7057 | 4.0 | 720 | 1.6996 | 0.2922 |
| 1.7057 | 5.0 | 900 | 1.6538 | 0.2988 |
| 1.6492 | 6.0 | 1080 | 1.6283 | 0.3118 |
| 1.6492 | 7.0 | 1260 | 1.5879 | 0.3270 |
| 1.6492 | 8.0 | 1440 | 1.5578 | 0.3387 |
| 1.5479 | 9.0 | 1620 | 1.5355 | 0.3503 |
| 1.5479 | 10.0 | 1800 | 1.5148 | 0.3561 |
| 1.5479 | 11.0 | 1980 | 1.5062 | 0.3561 |
| 1.4735 | 12.0 | 2160 | 1.5005 | 0.3691 |
| 1.4735 | 13.0 | 2340 | 1.4876 | 0.3843 |
| 1.437 | 14.0 | 2520 | 1.4799 | 0.3800 |
| 1.437 | 15.0 | 2700 | 1.4768 | 0.3785 |
| 1.437 | 16.0 | 2880 | 1.4732 | 0.3851 |
| 1.4223 | 17.0 | 3060 | 1.4689 | 0.3800 |
| 1.4223 | 18.0 | 3240 | 1.4684 | 0.3822 |
| 1.4223 | 19.0 | 3420 | 1.4657 | 0.3822 |
| 1.4123 | 20.0 | 3600 | 1.4650 | 0.3843 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
sosoai/hansoldeco-interior-defective-model-13.6B-v0.1-mlx | sosoai | 2024-03-24T21:45:39Z | 5 | 0 | mlx | ["mlx", "safetensors", "mistral", "en", "ko", "license:apache-2.0", "region:us"] | null | 2024-03-24T20:43:27Z |
---
language:
- en
- ko
license: apache-2.0
tags:
- mlx
---
# sosoai/hansoldeco-interior-defective-model-13.6B-v0.1-mlx
This model was converted to MLX format from [`sosoai/hansoldeco-interior-defective-model-13.6B-v0.1`](https://huggingface.co/sosoai/hansoldeco-interior-defective-model-13.6B-v0.1).
Refer to the [original model card](https://huggingface.co/sosoai/hansoldeco-interior-defective-model-13.6B-v0.1) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("sosoai/hansoldeco-interior-defective-model-13.6B-v0.1-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
readingrocket/clip-vit-base-patch32-002 | readingrocket | 2024-03-24T21:39:36Z | 0 | 0 | transformers | ["transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-03-24T21:39:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
javijer/lora-phi2-alpaca
|
javijer
| 2024-03-24T21:34:54Z | 56 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T21:01:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
qwp4w3hyb/Cerebrum-1.0-8x7b-iMat-GGUF
|
qwp4w3hyb
| 2024-03-24T21:33:44Z | 5 | 0 | null |
[
"gguf",
"Mixtral",
"instruct",
"finetune",
"imatrix",
"base_model:AetherResearch/Cerebrum-1.0-8x7b",
"base_model:quantized:AetherResearch/Cerebrum-1.0-8x7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-24T18:45:31Z |
---
base_model: AetherResearch/Cerebrum-1.0-8x7b
tags:
- Mixtral
- instruct
- finetune
- imatrix
model-index:
- name: Cerebrum-1.0-8x7b-iMat-GGUF
results: []
license: apache-2.0
---
# Cerebrum-1.0-8x7b-iMat-GGUF
Source Model: [AetherResearch/Cerebrum-1.0-8x7b](https://huggingface.co/AetherResearch/Cerebrum-1.0-8x7b)
Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [46acb3676718b983157058aecf729a2064fc7d34](https://github.com/ggerganov/llama.cpp/commit/46acb3676718b983157058aecf729a2064fc7d34)
The imatrix was generated from the f16 GGUF via this command:

    ./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat

using the dataset from [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
|
rajevan123/STS-Lora-Fine-Tuning-Capstone-bert-testing-21-with-lower-r
|
rajevan123
| 2024-03-24T21:23:36Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:dslim/bert-base-NER",
"base_model:adapter:dslim/bert-base-NER",
"license:mit",
"region:us"
] | null | 2024-03-24T21:05:07Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: dslim/bert-base-NER
model-index:
- name: STS-Lora-Fine-Tuning-Capstone-bert-testing-21-with-lower-r
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# STS-Lora-Fine-Tuning-Capstone-bert-testing-21-with-lower-r
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5146
- Accuracy: 0.3604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 180 | 1.7493 | 0.2429 |
| No log | 2.0 | 360 | 1.7409 | 0.2444 |
| 1.7063 | 3.0 | 540 | 1.7315 | 0.2408 |
| 1.7063 | 4.0 | 720 | 1.7132 | 0.2741 |
| 1.7063 | 5.0 | 900 | 1.6786 | 0.2879 |
| 1.6653 | 6.0 | 1080 | 1.6617 | 0.2959 |
| 1.6653 | 7.0 | 1260 | 1.6399 | 0.3031 |
| 1.6653 | 8.0 | 1440 | 1.6109 | 0.3205 |
| 1.5949 | 9.0 | 1620 | 1.5937 | 0.3292 |
| 1.5949 | 10.0 | 1800 | 1.5715 | 0.3321 |
| 1.5949 | 11.0 | 1980 | 1.5627 | 0.3387 |
| 1.5344 | 12.0 | 2160 | 1.5543 | 0.3459 |
| 1.5344 | 13.0 | 2340 | 1.5396 | 0.3590 |
| 1.4932 | 14.0 | 2520 | 1.5295 | 0.3524 |
| 1.4932 | 15.0 | 2700 | 1.5270 | 0.3568 |
| 1.4932 | 16.0 | 2880 | 1.5240 | 0.3575 |
| 1.4738 | 17.0 | 3060 | 1.5177 | 0.3604 |
| 1.4738 | 18.0 | 3240 | 1.5185 | 0.3590 |
| 1.4738 | 19.0 | 3420 | 1.5156 | 0.3604 |
| 1.4609 | 20.0 | 3600 | 1.5146 | 0.3604 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
gonzalezrostani/my_awesome_wnut_all_Place
|
gonzalezrostani
| 2024-03-24T21:17:18Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-24T20:51:03Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_all_Place
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_all_Place
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0361
- Precision: 0.6277
- Recall: 0.7137
- F1: 0.6679
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 251 | 0.0318 | 0.5664 | 0.7742 | 0.6542 | 0.9862 |
| 0.0471 | 2.0 | 502 | 0.0301 | 0.6145 | 0.6815 | 0.6463 | 0.9868 |
| 0.0471 | 3.0 | 753 | 0.0338 | 0.6115 | 0.6411 | 0.6260 | 0.9855 |
| 0.0189 | 4.0 | 1004 | 0.0361 | 0.6277 | 0.7137 | 0.6679 | 0.9861 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
0x0daughter1/gemma_m27m4_ol1
|
0x0daughter1
| 2024-03-24T21:15:45Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T16:51:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jspr/bts_mistral_7b02_v6_merged
|
jspr
| 2024-03-24T21:05:10Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T21:01:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PETEPEtrek/mistral_persona
|
PETEPEtrek
| 2024-03-24T21:04:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-03-24T21:04:19Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
IntervitensInc/ScikitLLM-Model-GGUF-Imatrix
|
IntervitensInc
| 2024-03-24T21:00:23Z | 5 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-24T20:28:16Z |
Original model link: [Pclanglais/ScikitLLM-Model](https://huggingface.co/Pclanglais/ScikitLLM-Model).
For imatrix data generation, kalomaze's `groups_merged.txt` was used; you can find it [here](https://github.com/ggerganov/llama.cpp/files/14194570/groups_merged.txt).
Original model README below.
***ScikitLLM*** is an LLM fine-tuned on writing references and code for the Scikit-Learn documentation.
Features of ScikitLLM include:
* Support for RAG (three chunks)
* Sources and quotations using a modified version of the wiki syntax ("<ref>")
* Code samples and examples based on the code quoted in the chunks.
* Expanded knowledge/familiarity with the Scikit-Learn concepts and documentation.
## Training
ScikitLLM is based on Mistral-OpenHermes 7B, a pre-existing fine-tuned version of Mistral 7B. OpenHermes already includes many of the capabilities desired for the end use, including instruction tuning, source analysis, and native support for the ChatML syntax.
As a fine-tune of a fine-tune, ScikitLLM has been trained with a lower learning rate than is commonly used in fine-tuning projects.
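Given the ChatML support and three-chunk RAG described above, an inference prompt might be assembled along these lines; the chunk delimiters and system instruction are illustrative assumptions, not the format used in training.
```python
def build_chatml_prompt(question: str, chunks: list[str]) -> str:
    """Assemble an illustrative ChatML prompt carrying three retrieved chunks.

    The chunk labels and system text are assumptions for this sketch; the exact
    RAG format ScikitLLM was trained on is not documented in this card.
    """
    context = "\n\n".join(f"Chunk {i + 1}:\n{chunk}" for i, chunk in enumerate(chunks[:3]))
    return (
        "<|im_start|>system\n"
        "Answer questions about scikit-learn, citing the provided chunks with <ref> tags.<|im_end|>\n"
        f"<|im_start|>user\n{context}\n\nQuestion: {question}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```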
|
veronikazh/fine-tuned-gpt
|
veronikazh
| 2024-03-24T20:57:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-24T20:57:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blockblockblock/SlushySlerp-7B-bpw4.8
|
blockblockblock
| 2024-03-24T20:57:11Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-03-22T01:28:09Z |

A lewd, yet funny Mistral slerp merge.
Prompt Template: Unknown, ChatML maybe?
|
javijer/mistral-alpaca
|
javijer
| 2024-03-24T20:55:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-24T20:49:22Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** javijer
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
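A minimal way to load the uploaded weights for inference, assuming the repository is loadable through Unsloth's standard `FastLanguageModel` API (the sequence length and 4-bit flag below are assumptions, not values documented here), might look like this:
```python
from unsloth import FastLanguageModel
# Hedged sketch: load the fine-tuned weights in 4-bit and switch to inference mode.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="javijer/mistral-alpaca",
    max_seq_length=2048,   # assumption; use the context length the model was trained with
    load_in_4bit=True,     # assumption; mirrors the unsloth/mistral-7b-bnb-4bit base
)
FastLanguageModel.for_inference(model)
inputs = tokenizer("Write a short haiku about fine-tuning.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```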
|
Eric111/UltraCatunaMayo-DPO
|
Eric111
| 2024-03-24T20:51:37Z | 74 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T20:34:36Z |
---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
Finetuned version of Eric111/UltraCatunaMayo with DPO using Intel/Orca_dpo_pairs
## Model Details
### Model Description
Finetuned version of Eric111/UltraCatunaMayo with DPO using Intel/Orca_dpo_pairs
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gonzalezrostani/my_awesome_wnut_all_NEG
|
gonzalezrostani
| 2024-03-24T20:50:57Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-24T20:23:39Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_all_NEG
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_all_NEG
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0175
- Precision: 0.7955
- Recall: 0.8537
- F1: 0.8235
- Accuracy: 0.9948
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 251 | 0.0170 | 0.8056 | 0.8488 | 0.8266 | 0.9948 |
| 0.0275 | 2.0 | 502 | 0.0166 | 0.7937 | 0.8634 | 0.8271 | 0.9948 |
| 0.0275 | 3.0 | 753 | 0.0169 | 0.7982 | 0.8683 | 0.8318 | 0.9947 |
| 0.0083 | 4.0 | 1004 | 0.0175 | 0.7955 | 0.8537 | 0.8235 | 0.9948 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Vincenzo-Verma/bird-astral-canastero
|
Vincenzo-Verma
| 2024-03-24T20:50:50Z | 0 | 0 | null |
[
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-03-24T20:46:59Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### BIRD-:-Astral-Canastero Dreambooth model trained by Vincenzo-Verma following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 2206410100060
Sample pictures of this concept:
|
bartowski/mistral-orpo-capybara-7k-exl2
|
bartowski
| 2024-03-24T20:48:44Z | 4 | 0 | null |
[
"text-generation",
"en",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:mit",
"model-index",
"region:us"
] |
text-generation
| 2024-03-24T20:48:42Z |
---
language:
- en
license: mit
base_model:
- mistralai/Mistral-7B-v0.1
datasets:
- argilla/distilabel-capybara-dpo-7k-binarized
pipeline_tag: text-generation
model-index:
- name: Mistral-ORPO-Capybara-7k
results:
- task:
type: text-generation
dataset:
name: AlpacaEval 2 (LC)
type: AlpacaEval
metrics:
- type: AlpacaEval 2.0
value: 15.88%
name: Win Rate
source:
url: https://tatsu-lab.github.io/alpaca_eval/
name: self-reported
- task:
type: text-generation
dataset:
name: MT-Bench
type: MT-Bench
metrics:
- type: MT-Bench
value: 7.444
name: Score
source:
url: https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/
name: self-reported
quantized_by: bartowski
---
## Exllama v2 Quantizations of mistral-orpo-capybara-7k
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.16">turboderp's ExLlamaV2 v0.0.16</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/kaist-ai/mistral-orpo-capybara-7k
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/mistral-orpo-capybara-7k-exl2 mistral-orpo-capybara-7k-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `mistral-orpo-capybara-7k-exl2`:
```shell
mkdir mistral-orpo-capybara-7k-exl2
huggingface-cli download bartowski/mistral-orpo-capybara-7k-exl2 --local-dir mistral-orpo-capybara-7k-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir mistral-orpo-capybara-7k-exl2-6_5
huggingface-cli download bartowski/mistral-orpo-capybara-7k-exl2 --revision 6_5 --local-dir mistral-orpo-capybara-7k-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir mistral-orpo-capybara-7k-exl2-6.5
huggingface-cli download bartowski/mistral-orpo-capybara-7k-exl2 --revision 6_5 --local-dir mistral-orpo-capybara-7k-exl2-6.5 --local-dir-use-symlinks False
```
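The same branch can also be pulled from Python with `huggingface_hub.snapshot_download`, which sidesteps the folder-name quirks on Windows (repo and folder names below match the CLI examples above):
```python
from huggingface_hub import snapshot_download
# Download the 6.5 bpw branch into a local folder; `revision` selects the exl2 branch.
snapshot_download(
    repo_id="bartowski/mistral-orpo-capybara-7k-exl2",
    revision="6_5",
    local_dir="mistral-orpo-capybara-7k-exl2-6_5",
)
```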
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
blockblockblock/SlushySlerp-7B-bpw4.6
|
blockblockblock
| 2024-03-24T20:44:55Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-03-22T01:28:05Z |

A lewd, yet funny Mistral slerp merge.
Prompt Template: Unknown, ChatML maybe?
|
alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ
|
alokabhishek
| 2024-03-24T20:44:47Z | 78 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"4bit",
"AWQ",
"AutoAWQ",
"7b",
"quantized",
"Mistral",
"Mistral-7B",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-03-24T20:30:38Z |
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- 4bit
- AWQ
- AutoAWQ
- 7b
- quantized
- Mistral
- Mistral-7B
---
# Model Card for alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ
<!-- Provide a quick summary of what the model is/does. -->
This repo contains a 4-bit quantized (using AutoAWQ) model of Mistral AI_'s Mistral-7B-Instruct-v0.2.
AWQ (Activation-aware Weight Quantization for LLM Compression and Acceleration) was developed by MIT-HAN-Lab.
## Model Details
- Model creator: [Mistral AI_](https://huggingface.co/mistralai)
- Original model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
### About 4 bit quantization using AutoAWQ
- AutoAWQ github repo: [AutoAWQ github repo](https://github.com/casper-hansen/AutoAWQ/tree/main)
- MIT-han-lab llm-awq github repo: [MIT-han-lab llm-awq github repo](https://github.com/mit-han-lab/llm-awq/tree/main)
@inproceedings{lin2023awq,
title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Chen, Wei-Ming and Wang, Wei-Chen and Xiao, Guangxuan and Dang, Xingyu and Gan, Chuang and Han, Song},
booktitle={MLSys},
year={2024}
}
# How to Get Started with the Model
Use the code below to get started with the model.
## How to run from Python code
#### First install the package
```shell
!pip install autoawq
!pip install accelerate
```
#### Import
```python
import torch
import os
from torch import bfloat16
from huggingface_hub import login, HfApi, create_repo
from transformers import AutoTokenizer, pipeline
from awq import AutoAWQForCausalLM
```
#### Load the model and run inference
```python
# define the model ID
model_id_llama = "alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ"
# Load model
tokenizer_llama = AutoTokenizer.from_pretrained(model_id_llama, use_fast=True)
model_llama = AutoAWQForCausalLM.from_quantized(model_id_llama, fuse_layers=True, trust_remote_code=False, safetensors=True)
# Set up the prompt and prompt template. Change the instruction as per requirements.
prompt_llama = "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
formatted_prompt = f'''<s> [INST] You are a helpful, and fun loving assistant. Always answer as jestfully as possible.[/INST] </s> [INST] {prompt_llama}[/INST]'''
tokens = tokenizer_llama(formatted_prompt, return_tensors="pt").input_ids.cuda()
# Generate output, adjust parameters as per requirements
generation_output = model_llama.generate(tokens, do_sample=True, temperature=1.7, top_p=0.95, top_k=40, max_new_tokens=512)
# Print the output
print(tokenizer_llama.decode(generation_output[0], skip_special_tokens=True))
```
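As a lighter-weight alternative, recent `transformers` releases can usually load AWQ checkpoints directly into a `pipeline` when `autoawq` is installed; treat the sketch below as an assumption to verify against your installed versions rather than a documented path for this repo.
```python
from transformers import pipeline
# Hedged sketch: lets transformers handle the AWQ weights end to end.
pipe = pipeline(
    "text-generation",
    model="alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ",
    device_map="auto",
)
print(pipe("Tell me a funny joke about large language models.", max_new_tokens=128)[0]["generated_text"])
```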
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
raminass/M10
|
raminass
| 2024-03-24T20:43:17Z | 109 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:raminass/scotus-v10",
"base_model:finetune:raminass/scotus-v10",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-24T19:51:51Z |
---
license: cc-by-sa-4.0
base_model: raminass/scotus-v10
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: M10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M10
This model is a fine-tuned version of [raminass/scotus-v10](https://huggingface.co/raminass/scotus-v10) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9260
- Accuracy: 0.7436
## Model description
More information needed
## Intended uses & limitations
More information needed
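As an illustration only, the checkpoint should load with the standard `transformers` text-classification pipeline; the label set comes from the fine-tuning config and is not documented here, so treat this as a sketch rather than a reference usage.
```python
from transformers import pipeline

# Minimal sketch: assumes the checkpoint is public and carries its own
# id2label mapping from the fine-tuning run.
classifier = pipeline("text-classification", model="raminass/M10")
print(classifier("The judgment of the Court of Appeals is reversed and the case is remanded."))
```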
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7964 | 1.0 | 2170 | 0.9842 | 0.7103 |
| 0.4447 | 2.0 | 4340 | 0.9222 | 0.7378 |
| 0.2719 | 3.0 | 6510 | 0.9260 | 0.7436 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Pablito47/NER-TOTALAMOUNT
|
Pablito47
| 2024-03-24T20:41:25Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-24T19:59:35Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NER-TotalAmount
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-TotalAmount
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0973
- Precision: 0.8889
- Recall: 0.9308
- F1: 0.9094
- Accuracy: 0.9794
## Model description
More information needed
## Intended uses & limitations
More information needed
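As an illustration only (the exact entity labels come from the fine-tuning config and are not documented here), the checkpoint should work with the standard token-classification pipeline:
```python
from transformers import pipeline

# Minimal sketch: assumes the checkpoint is public and the tags follow a
# TOTAL_AMOUNT-style scheme; the receipt text below is made up.
ner = pipeline(
    "token-classification",
    model="Pablito47/NER-TOTALAMOUNT",
    aggregation_strategy="simple",
)
print(ner("Subtotal 42.10 EUR, VAT 8.00 EUR, total amount due 50.10 EUR."))
```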
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 18 | 0.5186 | 0.0524 | 0.0440 | 0.0479 | 0.8261 |
| No log | 2.0 | 36 | 0.2669 | 0.3287 | 0.3679 | 0.3472 | 0.8936 |
| No log | 3.0 | 54 | 0.1462 | 0.725 | 0.8208 | 0.7699 | 0.9516 |
| No log | 4.0 | 72 | 0.0991 | 0.8006 | 0.8962 | 0.8457 | 0.9668 |
| No log | 5.0 | 90 | 0.0937 | 0.8421 | 0.9057 | 0.8727 | 0.9718 |
| No log | 6.0 | 108 | 0.0774 | 0.8813 | 0.9340 | 0.9069 | 0.9775 |
| No log | 7.0 | 126 | 0.0764 | 0.8710 | 0.9340 | 0.9014 | 0.9794 |
| No log | 8.0 | 144 | 0.0753 | 0.8824 | 0.9434 | 0.9119 | 0.9794 |
| No log | 9.0 | 162 | 0.0831 | 0.8689 | 0.9591 | 0.9118 | 0.9775 |
| No log | 10.0 | 180 | 0.0871 | 0.8696 | 0.9434 | 0.9050 | 0.9783 |
| No log | 11.0 | 198 | 0.0906 | 0.8794 | 0.9403 | 0.9088 | 0.9786 |
| No log | 12.0 | 216 | 0.0843 | 0.8832 | 0.9277 | 0.9049 | 0.9779 |
| No log | 13.0 | 234 | 0.0882 | 0.8892 | 0.9591 | 0.9228 | 0.9802 |
| No log | 14.0 | 252 | 0.0977 | 0.8779 | 0.9497 | 0.9124 | 0.9786 |
| No log | 15.0 | 270 | 0.0831 | 0.8919 | 0.9340 | 0.9124 | 0.9794 |
| No log | 16.0 | 288 | 0.0881 | 0.8876 | 0.9434 | 0.9146 | 0.9802 |
| No log | 17.0 | 306 | 0.0898 | 0.8728 | 0.9497 | 0.9096 | 0.9794 |
| No log | 18.0 | 324 | 0.0890 | 0.8856 | 0.9497 | 0.9165 | 0.9809 |
| No log | 19.0 | 342 | 0.0900 | 0.8830 | 0.9497 | 0.9152 | 0.9805 |
| No log | 20.0 | 360 | 0.0933 | 0.8886 | 0.9528 | 0.9196 | 0.9809 |
| No log | 21.0 | 378 | 0.0941 | 0.8912 | 0.9528 | 0.9210 | 0.9805 |
| No log | 22.0 | 396 | 0.0979 | 0.8909 | 0.9497 | 0.9193 | 0.9798 |
| No log | 23.0 | 414 | 0.0998 | 0.8935 | 0.9497 | 0.9207 | 0.9802 |
| No log | 24.0 | 432 | 0.0975 | 0.8889 | 0.9308 | 0.9094 | 0.9794 |
| No log | 25.0 | 450 | 0.0973 | 0.8889 | 0.9308 | 0.9094 | 0.9794 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.12.0
- Tokenizers 0.15.1
|
javijer/llama2-alpaca
|
javijer
| 2024-03-24T20:41:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"base_model:finetune:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-24T20:40:25Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** javijer
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
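A minimal inference sketch, assuming the repository holds weights (or adapters) loadable by Unsloth's `FastLanguageModel` and that an Alpaca-style prompt was used during fine-tuning:
```python
from unsloth import FastLanguageModel

# Sketch only: the model_name, sequence length, and prompt format are assumptions.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="javijer/llama2-alpaca",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

prompt = "### Instruction:\nExplain what the Alpaca dataset is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```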
|
Sayan18/finetune_starcoder2
|
Sayan18
| 2024-03-24T20:36:37Z | 21 | 3 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:bigcode/the-stack-smol",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2024-03-22T13:01:33Z |
---
license: bigcode-openrail-m
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: bigcode/starcoder2-3b
model-index:
- name: finetune_starcoder2
results: []
datasets:
- bigcode/the-stack-smol
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_starcoder2
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on [bigcode/the-stack-smol](https://huggingface.co/datasets/bigcode/the-stack-smol).
## Model description
This fine-tuned model builds upon the `bigcode/starcoder2-3b` base model, further specializing it for code completion tasks using the SQL subset of the `bigcode/the-stack-smol` dataset. This dataset focuses on code snippets and solutions, allowing the model to suggest relevant completions and potentially even generate code based on your prompts.
## Intended uses & limitations
This Text-to-SQL generator is designed to bridge the gap between users and databases. Here are some of its key intended uses:
* **Non-programmers interacting with databases:** Users who are unfamiliar with writing SQL queries can leverage this tool to ask questions about the database in natural language and get the corresponding SQL query generated. This allows them to access and analyze data without needing programming expertise.
* **Data exploration and analysis:** Analysts or researchers can use the Text-to-SQL generator to quickly formulate queries for exploratory data analysis. It can save time by automatically generating basic SQL queries, allowing users to focus on refining their questions and interpreting the results.
* **Automating repetitive tasks:** For tasks requiring frequent execution of similar SQL queries based on changing parameters, the Text-to-SQL generator can automate the process of generating the queries. This can improve efficiency and reduce errors.
* **Learning SQL:** Beginners can use the Text-to-SQL generator to experiment with natural language prompts and see the corresponding SQL queries. This can be a helpful tool for understanding the relationship between natural language and SQL syntax, aiding in learning the basics of SQL.
## Limitations of the Text-to-SQL Generator
While this tool offers a convenient way to generate SQL queries, it's important to be aware of its limitations:
* **Complexity:** The Text-to-SQL generator might struggle with highly complex queries involving advanced SQL features (e.g., joins with multiple conditions, subqueries). It's best suited for simpler queries that can be expressed in natural language.
* **Accuracy:** The generated SQL queries might not always be perfect. The model might misinterpret the user's intent or generate syntactically incorrect queries. It's crucial to review and potentially edit the generated SQL before running it on the database.
* **Domain-specific knowledge:** The Text-to-SQL generator might not understand the specific terminology or structure of your database. If your database schema or data contains domain-specific terms, you might need to adjust the natural language prompts to ensure accurate query generation.
* **Security:** It's important to be cautious when using the Text-to-SQL generator with sensitive data. Ensure the tool doesn't introduce security vulnerabilities by generating unintended queries or exposing sensitive information.
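To make the intended text-to-SQL use above concrete, here is a minimal inference sketch. It assumes this repository contains a PEFT (LoRA) adapter on top of `bigcode/starcoder2-3b` and uses an illustrative comment-style prompt; adjust both to your setup.
```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Sketch only: the adapter/base pairing and the prompt format are assumptions.
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-3b")
model = AutoPeftModelForCausalLM.from_pretrained(
    "Sayan18/finetune_starcoder2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "-- Question: total sales per customer in 2023\nSELECT"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Review the generated SQL before running it against a real database, as noted in the limitations above.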
## Training and evaluation data
More information needed
## Training procedure
**1. Load Dataset and Model:**
- Load the `bigcode/the-stack-smol` dataset using the Hugging Face Datasets library.
- Filter for the specified subset (`data/sql`) and split (`train`).
- Load the `bigcode/starcoder2-3b` model from the Hugging Face Hub with '4-bit' quantization.
**2. Preprocess Data:**
- Tokenize the code text using the appropriate tokenizer for the chosen model.
- Apply necessary cleaning or normalization (e.g., removing comments, handling indentation).
- Create input examples suitable for the model's architecture (e.g., with masked language modeling objectives).
**3. Configure Training:**
- Initialize a Trainer object (likely from a library like Transformers).
- Set training arguments based on the provided `args`:
- Learning rate, optimizer, scheduler
- Gradient accumulation steps
- Weight decay
- Loss function (likely cross-entropy)
- Evaluation metrics (e.g., accuracy, perplexity)
- Device placement (GPU/TPU)
- Number of processes for potential distributed training
**4. Train the Model:**
- Start the training loop for the specified `max_steps`.
- Iterate through batches of preprocessed code examples.
- Forward pass through the model to generate predictions.
- Calculate loss based on ground truth and predictions.
- Backpropagate gradients to update model parameters.
**5. Evaluation (Optional):**
- Periodically evaluate model performance on a validation or test set.
- Calculate relevant metrics (accuracy, perplexity, code completion accuracy).
- Monitor training progress and adjust hyperparameters as needed.
**6. Save the Fine-tuned Model:**
- Save the model's weights and configuration to the `output_dir`.
**7. Push to Hugging Face Hub (Optional):**
- If `push_to_hub` is True, create a model card and push the model to Hugging Face Hub for sharing and use.
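Steps 1–2 above roughly correspond to the following sketch (the subset and split names follow the description and may need adjusting for your environment):
```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# Load the SQL subset of the-stack-smol and the 4-bit quantized base model.
dataset = load_dataset("bigcode/the-stack-smol", data_dir="data/sql", split="train")
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-3b")
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoder2-3b",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
```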
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
[](https://huggingface.co/Sayan18/finetune_starcoder2/resolve/main/Training%20Results/W%26B%20Chart%203_25_2024%2C%2012_18_08%20AM.png)
[](https://huggingface.co/Sayan18/finetune_starcoder2/resolve/main/Training%20Results/W%26B%20Chart%203_25_2024%2C%2012_19_35%20AM.png)
[](https://huggingface.co/Sayan18/finetune_starcoder2/resolve/main/Training%20Results/W%26B%20Chart%203_25_2024%2C%2012_19_43%20AM.png)
[](https://huggingface.co/Sayan18/finetune_starcoder2/resolve/main/Training%20Results/W%26B%20Chart%203_25_2024%2C%2012_19_54%20AM.png)
[](https://huggingface.co/Sayan18/finetune_starcoder2/resolve/main/Training%20Results/W%26B%20Chart%203_25_2024%2C%2012_20_00%20AM.png)
### Framework versions
- PEFT 0.8.2
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
bartowski/mistral-orpo-capybara-7k-GGUF
|
bartowski
| 2024-03-24T20:36:22Z | 145 | 2 | null |
[
"gguf",
"text-generation",
"en",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:quantized:mistralai/Mistral-7B-v0.1",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-03-24T20:23:15Z |
---
language:
- en
license: mit
base_model:
- mistralai/Mistral-7B-v0.1
datasets:
- argilla/distilabel-capybara-dpo-7k-binarized
pipeline_tag: text-generation
model-index:
- name: Mistral-ORPO-Capybara-7k
results:
- task:
type: text-generation
dataset:
name: AlpacaEval 2 (LC)
type: AlpacaEval
metrics:
- type: AlpacaEval 2.0
value: 15.88%
name: Win Rate
source:
url: https://tatsu-lab.github.io/alpaca_eval/
name: self-reported
- task:
type: text-generation
dataset:
name: MT-Bench
type: MT-Bench
metrics:
- type: MT-Bench
value: 7.444
name: Score
source:
url: https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/
name: self-reported
quantized_by: bartowski
---
## Llamacpp Quantizations of mistral-orpo-capybara-7k
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization.
Original model: https://huggingface.co/kaist-ai/mistral-orpo-capybara-7k
Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mistral-orpo-capybara-7k-Q8_0.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| [mistral-orpo-capybara-7k-Q6_K.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [mistral-orpo-capybara-7k-Q5_K_M.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
| [mistral-orpo-capybara-7k-Q5_K_S.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
| [mistral-orpo-capybara-7k-Q5_0.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
| [mistral-orpo-capybara-7k-Q4_K_M.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
| [mistral-orpo-capybara-7k-Q4_K_S.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
| [mistral-orpo-capybara-7k-IQ4_NL.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-IQ4_NL.gguf) | IQ4_NL | 4.15GB | Good quality, similar to Q4_K_S, uses a newer quantization method. |
| [mistral-orpo-capybara-7k-IQ4_XS.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-IQ4_XS.gguf) | IQ4_XS | 3.94GB | Decent quality, new method with similar performance to Q4. |
| [mistral-orpo-capybara-7k-Q4_0.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
| [mistral-orpo-capybara-7k-IQ3_M.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance. |
| [mistral-orpo-capybara-7k-IQ3_S.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| [mistral-orpo-capybara-7k-Q3_K_L.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| [mistral-orpo-capybara-7k-Q3_K_M.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
| [mistral-orpo-capybara-7k-Q3_K_S.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [mistral-orpo-capybara-7k-Q2_K.gguf](https://huggingface.co/bartowski/mistral-orpo-capybara-7k-GGUF/blob/main/mistral-orpo-capybara-7k-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
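As a quick sketch (assuming a local llama.cpp build at release b2440 or newer and `huggingface_hub` installed), a single quant can be fetched and run like this; the chosen file and prompt are examples only:
```shell
huggingface-cli download bartowski/mistral-orpo-capybara-7k-GGUF \
  mistral-orpo-capybara-7k-Q4_K_M.gguf --local-dir .
./main -m mistral-orpo-capybara-7k-Q4_K_M.gguf -n 256 -p "Explain ORPO in one paragraph."
```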
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Lewdiculous/flammen10-mistral-7B-GGUF-IQ-Imatrix-Testing
|
Lewdiculous
| 2024-03-24T20:35:22Z | 61 | 2 | null |
[
"gguf",
"experimental",
"testing",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-24T20:01:51Z |
---
license: apache-2.0
tags:
- experimental
- testing
- gguf
---
**These are test-quants for an experimental model.**
Original model weights: <br> https://huggingface.co/nbeerbower/flammen10-mistral-7B
|
niryuu/Karasu-1.1b-task1357-vector
|
niryuu
| 2024-03-24T20:31:59Z | 141 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T19:31:59Z |
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rsousa/ddpm-celebahq-finetuned-butterflies-2epochs
|
rsousa
| 2024-03-24T20:31:57Z | 45 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-03-24T20:31:34Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('rsousa/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
sayak0809/mistral-mental-health-FT-2ep
|
sayak0809
| 2024-03-24T20:31:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-24T20:31:41Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** sayak0809
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
blockblockblock/SlushySlerp-7B-bpw4.2
|
blockblockblock
| 2024-03-24T20:20:28Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-03-22T01:27:49Z |

a lewd, yet funny mistral slerp merge.
Prompt Template: Unknown, ChatML maybe?
|
jishnunair/distilBert_NER_finer
|
jishnunair
| 2024-03-24T20:11:31Z | 114 | 1 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"en",
"dataset:nlpaueb/finer-139",
"base_model:distilbert/distilbert-base-cased",
"base_model:quantized:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-24T07:48:17Z |
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBert_NER_finer
results: []
datasets:
- nlpaueb/finer-139
language:
- en
pipeline_tag: token-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBert_NER_finer
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the [Finer-139](https://huggingface.co/datasets/nlpaueb/finer-139) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0198
- Precision: 0.9445
- Recall: 0.9640
- F1: 0.9541
- Accuracy: 0.9954
## Training and evaluation data
The training data consists of the 4 most widely available ner_tags from the Finer-139 dataset. The training and test data were curated from this source accordingly.
## Prediction procedure
```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForTokenClassification
import torch

# label_list is assumed to be defined elsewhere: the id-to-tag mapping used during training.
def onnx_inference(checkpoint, test_data, export=False):
    test_text = " ".join(test_data['tokens'])
    print("Test Text: " + test_text)
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = ORTModelForTokenClassification.from_pretrained(checkpoint, export=export)
    inputs = tokenizer(test_text, return_tensors="pt")
    outputs = model(**inputs).logits
    predictions = torch.argmax(outputs, dim=2)
    # Map each predicted id and each reference id to its tag name
    predicted_token_class = [label_list[int(t)] for t in predictions[0]]
    ner_tags = [label_list[int(t)] for t in test_data['ner_tags']]
    print("Original Tags: ")
    print(ner_tags)
    print("Predicted Tags: ")
    print(predicted_token_class)

onnx_model_path = ""  # add the path
onnx_inference(onnx_model_path, test_data)
"""
Here the test_data should contain "tokens" and "ner_tags". This can be of type Dataset.
"""
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0034 | 1.0 | 1620 | 0.0261 | 0.9167 | 0.9668 | 0.9411 | 0.9941 |
| 0.0031 | 2.0 | 3240 | 0.0182 | 0.9471 | 0.9651 | 0.9561 | 0.9956 |
| 0.0012 | 3.0 | 4860 | 0.0198 | 0.9445 | 0.9640 | 0.9541 | 0.9954 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
MikeGreen2710/mlm_listing_2
|
MikeGreen2710
| 2024-03-24T20:06:54Z | 180 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-03-24T20:06:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gabe-Thomp/path-to-save-model
|
Gabe-Thomp
| 2024-03-24T20:02:05Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-24T19:57:01Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: CompVis/stable-diffusion-v1-4
inference: true
instance_prompt: a photo of sks dog
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - Gabe-Thomp/path-to-save-model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
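Until the snippet above is filled in, here is a minimal sketch assuming the full pipeline was pushed to this repository and a CUDA GPU is available:
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: loads this repo and reuses the instance prompt "sks dog".
pipe = StableDiffusionPipeline.from_pretrained(
    "Gabe-Thomp/path-to-save-model", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```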
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
blockblockblock/SlushySlerp-7B-bpw3.7
|
blockblockblock
| 2024-03-24T19:55:37Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-03-22T01:27:42Z |

a lewd, yet funny mistral slerp merge.
Prompt Template: Unknown, ChatML maybe?
|
pepijn223/rl_course_vizdoom_health_gathering_supreme
|
pepijn223
| 2024-03-24T19:52:24Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-24T19:52:15Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.03 +/- 4.83
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r pepijn223/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
haryoaw/scenario-TCR_data-cl-cardiff_cl_only1
|
haryoaw
| 2024-03-24T19:50:15Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-24T19:49:20Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: scenario-TCR_data-cl-cardiff_cl_only1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scenario-TCR_data-cl-cardiff_cl_only1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6766
- Accuracy: 0.5077
- F1: 0.5080
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 112
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.09 | 250 | 1.0916 | 0.5062 | 0.5083 |
| 0.9115 | 2.17 | 500 | 1.1929 | 0.5031 | 0.5019 |
| 0.9115 | 3.26 | 750 | 1.2608 | 0.5077 | 0.5074 |
| 0.5614 | 4.35 | 1000 | 1.7305 | 0.5062 | 0.4996 |
| 0.5614 | 5.43 | 1250 | 1.9217 | 0.5108 | 0.5104 |
| 0.2935 | 6.52 | 1500 | 1.9631 | 0.5116 | 0.5097 |
| 0.2935 | 7.61 | 1750 | 2.2019 | 0.4900 | 0.4880 |
| 0.1741 | 8.7 | 2000 | 3.0038 | 0.4961 | 0.4934 |
| 0.1741 | 9.78 | 2250 | 2.8681 | 0.4869 | 0.4878 |
| 0.1225 | 10.87 | 2500 | 3.2569 | 0.4907 | 0.4917 |
| 0.1225 | 11.96 | 2750 | 3.2315 | 0.4853 | 0.4854 |
| 0.0918 | 13.04 | 3000 | 3.1706 | 0.4915 | 0.4924 |
| 0.0918 | 14.13 | 3250 | 3.4457 | 0.4799 | 0.4754 |
| 0.0622 | 15.22 | 3500 | 3.8112 | 0.4977 | 0.4982 |
| 0.0622 | 16.3 | 3750 | 3.7257 | 0.5069 | 0.5081 |
| 0.0425 | 17.39 | 4000 | 3.9484 | 0.5069 | 0.5085 |
| 0.0425 | 18.48 | 4250 | 4.0832 | 0.5100 | 0.5095 |
| 0.0251 | 19.57 | 4500 | 4.2009 | 0.5170 | 0.5169 |
| 0.0251 | 20.65 | 4750 | 4.5054 | 0.4961 | 0.4926 |
| 0.0205 | 21.74 | 5000 | 4.5404 | 0.5062 | 0.5064 |
| 0.0205 | 22.83 | 5250 | 4.7252 | 0.4977 | 0.4977 |
| 0.0159 | 23.91 | 5500 | 4.7574 | 0.5046 | 0.5020 |
| 0.0159 | 25.0 | 5750 | 4.7681 | 0.5062 | 0.5074 |
| 0.0112 | 26.09 | 6000 | 4.6950 | 0.5 | 0.4984 |
| 0.0112 | 27.17 | 6250 | 4.8415 | 0.4931 | 0.4901 |
| 0.0097 | 28.26 | 6500 | 4.6840 | 0.5031 | 0.5032 |
| 0.0097 | 29.35 | 6750 | 4.6766 | 0.5077 | 0.5080 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ADG-2353/dqn-SpaceInvadersNoFrameskip-v4
|
ADG-2353
| 2024-03-24T19:49:23Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-24T19:48:45Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 655.00 +/- 228.35
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ADG-2353 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ADG-2353 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ADG-2353
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Flamgrise/short-stories-lol-fine-tuned
|
Flamgrise
| 2024-03-24T19:46:32Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text-classification",
"generated_from_trainer",
"base_model:facebook/bart-large-mnli",
"base_model:finetune:facebook/bart-large-mnli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-24T19:45:51Z |
---
license: mit
base_model: facebook/bart-large-mnli
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: short-stories-lol-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# short-stories-lol-fine-tuned
This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7862
- F1: 0.1353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 44 | 1.7573 | 0.0698 |
| No log | 2.0 | 88 | 1.7508 | 0.0569 |
| No log | 3.0 | 132 | 1.8932 | 0.0569 |
| No log | 4.0 | 176 | 1.9888 | 0.1432 |
| No log | 5.0 | 220 | 2.4816 | 0.0702 |
| No log | 6.0 | 264 | 2.8377 | 0.0859 |
| No log | 7.0 | 308 | 3.3364 | 0.1256 |
| No log | 8.0 | 352 | 3.2681 | 0.1017 |
| No log | 9.0 | 396 | 3.5178 | 0.1179 |
| No log | 10.0 | 440 | 3.7246 | 0.1548 |
| No log | 11.0 | 484 | 3.7577 | 0.1353 |
| 0.6317 | 12.0 | 528 | 3.7862 | 0.1353 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
bartowski/Mistral-7B-v0.1-ORPO-GGUF
|
bartowski
| 2024-03-24T19:44:56Z | 189 | 1 |
transformers
|
[
"transformers",
"gguf",
"orpo",
"qlora",
"trl",
"text-generation",
"en",
"dataset:alvarobartt/dpo-mix-7k-simplified",
"dataset:argilla/dpo-mix-7k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:quantized:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us",
"conversational"
] |
text-generation
| 2024-03-24T19:31:23Z |
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- orpo
- qlora
- trl
datasets:
- alvarobartt/dpo-mix-7k-simplified
- argilla/dpo-mix-7k
base_model: mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
inference: false
quantized_by: bartowski
---
## Llamacpp Quantizations of Mistral-7B-v0.1-ORPO
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization.
Original model: https://huggingface.co/alvarobartt/Mistral-7B-v0.1-ORPO
Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-7B-v0.1-ORPO-Q8_0.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| [Mistral-7B-v0.1-ORPO-Q6_K.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [Mistral-7B-v0.1-ORPO-Q5_K_M.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
| [Mistral-7B-v0.1-ORPO-Q5_K_S.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
| [Mistral-7B-v0.1-ORPO-Q5_0.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
| [Mistral-7B-v0.1-ORPO-Q4_K_M.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
| [Mistral-7B-v0.1-ORPO-Q4_K_S.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
| [Mistral-7B-v0.1-ORPO-IQ4_NL.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-IQ4_NL.gguf) | IQ4_NL | 4.15GB | Good quality, similar to Q4_K_S, uses a newer quantization method. |
| [Mistral-7B-v0.1-ORPO-IQ4_XS.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-IQ4_XS.gguf) | IQ4_XS | 3.94GB | Decent quality, new method with similar performance to Q4. |
| [Mistral-7B-v0.1-ORPO-Q4_0.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
| [Mistral-7B-v0.1-ORPO-IQ3_M.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance. |
| [Mistral-7B-v0.1-ORPO-IQ3_S.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| [Mistral-7B-v0.1-ORPO-Q3_K_L.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| [Mistral-7B-v0.1-ORPO-Q3_K_M.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
| [Mistral-7B-v0.1-ORPO-Q3_K_S.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [Mistral-7B-v0.1-ORPO-Q2_K.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.1-ORPO-GGUF/blob/main/Mistral-7B-v0.1-ORPO-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Praneeth/StarMix-7B-slerp
|
Praneeth
| 2024-03-24T19:41:39Z | 57 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"berkeley-nest/Starling-LM-7B-alpha",
"mistralai/Mistral-7B-Instruct-v0.2",
"conversational",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:merge:berkeley-nest/Starling-LM-7B-alpha",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:merge:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T20:33:01Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- berkeley-nest/Starling-LM-7B-alpha
- mistralai/Mistral-7B-Instruct-v0.2
base_model:
- berkeley-nest/Starling-LM-7B-alpha
- mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: StarMix-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.36
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Praneeth/StarMix-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.1
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Praneeth/StarMix-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Praneeth/StarMix-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.81
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Praneeth/StarMix-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Praneeth/StarMix-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 53.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Praneeth/StarMix-7B-slerp
name: Open LLM Leaderboard
---
# StarMix-7B-slerp
StarMix-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: berkeley-nest/Starling-LM-7B-alpha
layer_range: [0, 32]
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Praneeth__StarMix-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.41|
|AI2 Reasoning Challenge (25-Shot)|65.36|
|HellaSwag (10-Shot) |85.10|
|MMLU (5-Shot) |62.57|
|TruthfulQA (0-shot) |57.81|
|Winogrande (5-shot) |79.95|
|GSM8k (5-shot) |53.68|
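A minimal usage sketch (assuming the merged model inherits the Mistral-7B-Instruct-v0.2 chat template from its base tokenizer):
```python
import torch
from transformers import AutoTokenizer, pipeline

model_id = "Praneeth/StarMix-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a prompt with the tokenizer's chat template (assumed to be present).
messages = [{"role": "user", "content": "What is a SLERP merge of two language models?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

generator = pipeline(
    "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
print(generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)[0]["generated_text"])
```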
|
dasasdsa/ChillsXD
|
dasasdsa
| 2024-03-24T19:40:29Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-03-24T19:40:29Z |
---
license: apache-2.0
license_name: rvc
license_link: LICENSE
---
|
uoseftalaat/whisper-small-final-v4
|
uoseftalaat
| 2024-03-24T19:36:55Z | 77 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ara",
"dataset:AsemBadr/GP",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-24T09:16:06Z |
---
language:
- ara
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- AsemBadr/GP
metrics:
- wer
model-index:
- name: Whisper Small for Quran Recognition
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Quran_Reciters
type: AsemBadr/GP
config: default
split: test
args: 'config: default, split: train'
metrics:
- name: Wer
type: wer
value: 3.2834794567646557
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small for Quran Recognition
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Quran_Reciters dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0188
- Wer: 3.2835
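For reference, a minimal transcription sketch using the 🤗 `pipeline` API (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="uoseftalaat/whisper-small-final-v4")

# Placeholder path to a recitation recording
print(asr("recitation.wav")["text"])
```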
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0059 | 1.62 | 500 | 0.0259 | 5.8277 |
| 0.0019 | 3.24 | 1000 | 0.0223 | 4.1430 |
| 0.0007 | 4.85 | 1500 | 0.0211 | 4.0055 |
| 0.0003 | 6.47 | 2000 | 0.0198 | 3.4726 |
| 0.0 | 8.09 | 2500 | 0.0191 | 3.3351 |
| 0.0 | 9.71 | 3000 | 0.0187 | 3.3007 |
| 0.0 | 11.33 | 3500 | 0.0188 | 3.2491 |
| 0.0 | 12.94 | 4000 | 0.0188 | 3.2835 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.1.2
- Datasets 2.17.1
- Tokenizers 0.15.2
|
blockblockblock/SlushySlerp-7B-bpw3
|
blockblockblock
| 2024-03-24T19:31:32Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-03-22T00:12:28Z |

A lewd, yet funny Mistral slerp merge.
Prompt Template: Unknown, ChatML maybe?
|
qwp4w3hyb/Cerebrum-1.0-8x7b-imatrix-GGUF
|
qwp4w3hyb
| 2024-03-24T19:31:20Z | 13 | 1 | null |
[
"gguf",
"mixtral",
"conversational",
"finetune",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-21T10:34:29Z |
---
license: apache-2.0
tags:
- mixtral
- conversational
- finetune
---
Better quants based on the f16 available here:
https://huggingface.co/qwp4w3hyb/Cerebrum-1.0-8x7b-iMat-GGUF
# ~Model Card for Cerebrum-1.0-8x7b-imatrix-GGUF~
~Quantized from https://huggingface.co/AetherResearch/Cerebrum-1.0-8x7b using llama.cpp commit 46acb3676718b983157058aecf729a2064fc7d34 utilizing an importance matrix.~
~Quants will be uploaded over a slow German internet connection, so they will appear one by one; stay tuned.~
~imatrix generated with:~
~./imatrix -ofreq 4 -b 512 -c 512 -t 14 --chunks 24 -m ../models/Cerebrum-1.0-8x7b-GGUF/cerebrum-1.0-8x7b-Q8_0.gguf -f ./groups_merged.txt~
~with the dataset from here:
https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384~
~Sadly this means the imatrix is generated from the Q8 instead of the unquantized f16, as it should be; I can't get it to work with the f16 on my machine at the moment. It should still improve the performance of the quants, though.~
|
J-Douglas/Novelty
|
J-Douglas
| 2024-03-24T19:30:03Z | 8 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"image-to-image",
"license:mit",
"diffusers:StableDiffusionInstructPix2PixPipeline",
"region:us"
] |
image-to-image
| 2024-03-23T21:42:29Z |
---
license: mit
tags:
- image-to-image
---
# InstructPix2Pix: Learning to Follow Image Editing Instructions
GitHub: https://github.com/timothybrooks/instruct-pix2pix
<img src='https://instruct-pix2pix.timothybrooks.com/teaser.jpg'/>
## Example
To use `InstructPix2Pix`, install `diffusers` using `main` for now. The pipeline will be available in the next release
```bash
pip install diffusers accelerate safetensors transformers
```
```python
import PIL.Image
import PIL.ImageOps
import requests
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler
model_id = "timbrooks/instruct-pix2pix"
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None)
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
url = "https://raw.githubusercontent.com/timothybrooks/instruct-pix2pix/main/imgs/example.jpg"
def download_image(url):
image = PIL.Image.open(requests.get(url, stream=True).raw)
image = PIL.ImageOps.exif_transpose(image)
image = image.convert("RGB")
return image
image = download_image(url)
prompt = "turn him into cyborg"
images = pipe(prompt, image=image, num_inference_steps=10, image_guidance_scale=1).images
images[0]
```
|
weezywitasneezy/OxytocinErosEngineering_v0-4x7B-passthrough
|
weezywitasneezy
| 2024-03-24T19:26:53Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"ChaoticNeutrals/Eris_Remix_7B",
"Virt-io/Erebus-Holodeck-7B",
"jeiku/Eros_Prodigadigm_7B",
"Epiculous/Mika-7B",
"base_model:ChaoticNeutrals/Eris_Remix_7B",
"base_model:merge:ChaoticNeutrals/Eris_Remix_7B",
"base_model:Epiculous/Mika-7B",
"base_model:merge:Epiculous/Mika-7B",
"base_model:Virt-io/Erebus-Holodeck-7B",
"base_model:merge:Virt-io/Erebus-Holodeck-7B",
"base_model:jeiku/Eros_Prodigadigm_7B",
"base_model:merge:jeiku/Eros_Prodigadigm_7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T17:23:03Z |
---
tags:
- merge
- mergekit
- lazymergekit
- ChaoticNeutrals/Eris_Remix_7B
- Virt-io/Erebus-Holodeck-7B
- jeiku/Eros_Prodigadigm_7B
- Epiculous/Mika-7B
base_model:
- ChaoticNeutrals/Eris_Remix_7B
- Virt-io/Erebus-Holodeck-7B
- jeiku/Eros_Prodigadigm_7B
- Epiculous/Mika-7B
---
# OxytocinErosEngineering_v0-4x7B-passthrough
OxytocinErosEngineering_v0-4x7B-passthrough is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [ChaoticNeutrals/Eris_Remix_7B](https://huggingface.co/ChaoticNeutrals/Eris_Remix_7B)
* [Virt-io/Erebus-Holodeck-7B](https://huggingface.co/Virt-io/Erebus-Holodeck-7B)
* [jeiku/Eros_Prodigadigm_7B](https://huggingface.co/jeiku/Eros_Prodigadigm_7B)
* [Epiculous/Mika-7B](https://huggingface.co/Epiculous/Mika-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: ChaoticNeutrals/Eris_Remix_7B
layer_range: [0, 32]
- sources:
- model: Virt-io/Erebus-Holodeck-7B
layer_range: [8, 32]
- sources:
- model: jeiku/Eros_Prodigadigm_7B
layer_range: [16, 32]
- sources:
- model: Epiculous/Mika-7B
layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "weezywitasneezy/OxytocinErosEngineering_v0-4x7B-passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Cdywalst/donut-base-handwriting_recognition
|
Cdywalst
| 2024-03-24T19:24:36Z | 84 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-03-21T23:12:52Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-base-handwriting_recognition
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-handwriting_recognition
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
javijer/lora_model_phi2
|
javijer
| 2024-03-24T19:22:48Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T18:49:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blockblockblock/SlushySlerp-7B-bpw2.5
|
blockblockblock
| 2024-03-24T19:19:55Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-03-21T23:47:13Z |

A lewd, yet funny Mistral slerp merge.
Prompt Template: Unknown, ChatML maybe?
|
automerger/Experiment28Neuralsirkrishna-7B
|
automerger
| 2024-03-24T19:19:22Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:Kukedlc/NeuralSirKrishna-7b",
"base_model:finetune:Kukedlc/NeuralSirKrishna-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T19:18:33Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- Kukedlc/NeuralSirKrishna-7b
---
# Experiment28Neuralsirkrishna-7B
Experiment28Neuralsirkrishna-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b)
## 🧩 Configuration
```yaml
models:
- model: yam-peleg/Experiment28-7B
# No parameters necessary for base model
- model: Kukedlc/NeuralSirKrishna-7b
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: yam-peleg/Experiment28-7B
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment28Neuralsirkrishna-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
lunarsylph/gemmacell_v12
|
lunarsylph
| 2024-03-24T19:07:32Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-24T19:03:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
simoneteglia/phi-2-lyrical-genius
|
simoneteglia
| 2024-03-24T19:06:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-24T19:06:34Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
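A minimal loading sketch with PEFT, inferred from the metadata above rather than taken from the training run (the prompt is only an illustration):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Attach the adapter weights from this repository
model = PeftModel.from_pretrained(base, "simoneteglia/phi-2-lyrical-genius")

inputs = tokenizer("Write a short chorus about the sea:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```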
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.39.1
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.2
|
abacusai/bigstral-12b-v0.2-32k
|
abacusai
| 2024-03-24T18:59:44Z | 7 | 8 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:mistral-community/Mistral-7B-v0.2",
"base_model:finetune:mistral-community/Mistral-7B-v0.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T17:12:38Z |
---
base_model:
- alpindale/Mistral-7B-v0.2-hf
library_name: transformers
tags:
- mergekit
- merge
---

# bigstral-12b-v0.2-32k
`ollama run ehartford/bigstral`
This is Mistral-7B-v0.2 self-interleaved into a larger 12B model using MergeKit. It is intended for further pretraining.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 8]
model: alpindale/Mistral-7B-v0.2-hf
- sources:
- layer_range: [4, 12]
model: alpindale/Mistral-7B-v0.2-hf
- sources:
- layer_range: [8, 16]
model: alpindale/Mistral-7B-v0.2-hf
- sources:
- layer_range: [12, 20]
model: alpindale/Mistral-7B-v0.2-hf
- sources:
- layer_range: [16, 24]
model: alpindale/Mistral-7B-v0.2-hf
- sources:
- layer_range: [20, 28]
model: alpindale/Mistral-7B-v0.2-hf
- sources:
- layer_range: [24, 32]
model: alpindale/Mistral-7B-v0.2-hf
```
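A quick generation sketch (an illustration only; as noted above, this is a self-interleaved base model intended for further pretraining, so expect base-model behaviour rather than chat behaviour):
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="abacusai/bigstral-12b-v0.2-32k",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(pipe("Passthrough merging interleaves layers so that", max_new_tokens=64)[0]["generated_text"])
```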
|
Goekdeniz-Guelmez/J.O.S.I.E.3-Beta3-slerp
|
Goekdeniz-Guelmez
| 2024-03-24T18:53:42Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Locutusque/Hercules-3.1-Mistral-7B",
"cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:Locutusque/Hercules-3.1-Mistral-7B",
"base_model:merge:Locutusque/Hercules-3.1-Mistral-7B",
"base_model:cognitivecomputations/dolphin-2.8-experiment26-7b",
"base_model:merge:cognitivecomputations/dolphin-2.8-experiment26-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-15T15:25:51Z |
---
tags:
- merge
- mergekit
- lazymergekit
- Locutusque/Hercules-3.1-Mistral-7B
- cognitivecomputations/dolphin-2.8-experiment26-7b
base_model:
- Locutusque/Hercules-3.1-Mistral-7B
- cognitivecomputations/dolphin-2.8-experiment26-7b
license: apache-2.0
---
# JOSIE_Beta-3-7B-slerp
JOSIE_Beta-3-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Locutusque/Hercules-3.1-Mistral-7B](https://huggingface.co/Locutusque/Hercules-3.1-Mistral-7B)
* [cognitivecomputations/dolphin-2.8-experiment26-7b](https://huggingface.co/cognitivecomputations/dolphin-2.8-experiment26-7b)
# IMPORTANT!!!
Upon seeing the eval benchmarks on the LLM Leaderboard, this is the best-performing model, but it is not uncensored, and its answers are not really good when chatting with it.
I will further train it on datasets like Dolphin and others.
```json
{
"all": {
"acc": 0.6432209013684985,
"acc_stderr": 0.03221665824377992,
"acc_norm": 0.6450099678239628,
"acc_norm_stderr": 0.032867717920871294,
"mc1": 0.3353733170134639,
"mc1_stderr": 0.01652753403966899,
"mc2": 0.48804542326643174,
"mc2_stderr": 0.015087630632446147
},
"harness|arc:challenge|25": {
"acc": 0.6083617747440273,
"acc_stderr": 0.014264122124938217,
"acc_norm": 0.6339590443686007,
"acc_norm_stderr": 0.014077223108470139
},
"harness|hellaswag|10": {
"acc": 0.6618203545110536,
"acc_stderr": 0.0047212316370927225,
"acc_norm": 0.8456482772356104,
"acc_norm_stderr": 0.0036054721167622867
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.04218506215368879,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.04218506215368879
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.03823428969926605,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.03823428969926605
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249387,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249387
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6473988439306358,
"acc_stderr": 0.03643037168958548,
"acc_norm": 0.6473988439306358,
"acc_norm_stderr": 0.03643037168958548
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.574468085106383,
"acc_stderr": 0.03232146916224468,
"acc_norm": 0.574468085106383,
"acc_norm_stderr": 0.03232146916224468
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.046920083813689104,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.046920083813689104
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5310344827586206,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.5310344827586206,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41534391534391535,
"acc_stderr": 0.025379524910778405,
"acc_norm": 0.41534391534391535,
"acc_norm_stderr": 0.025379524910778405
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.023904914311782648,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.023904914311782648
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.035176035403610105,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.035176035403610105
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.032568666616811015,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.032568666616811015
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.028606204289229872,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.028606204289229872
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.023946724741563973,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.023946724741563973
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.029723278961476664,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.029723278961476664
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.680672268907563,
"acc_stderr": 0.030283995525884396,
"acc_norm": 0.680672268907563,
"acc_norm_stderr": 0.030283995525884396
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8422018348623853,
"acc_stderr": 0.01563002297009245,
"acc_norm": 0.8422018348623853,
"acc_norm_stderr": 0.01563002297009245
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5,
"acc_stderr": 0.034099716973523674,
"acc_norm": 0.5,
"acc_norm_stderr": 0.034099716973523674
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7990196078431373,
"acc_stderr": 0.028125972265654366,
"acc_norm": 0.7990196078431373,
"acc_norm_stderr": 0.028125972265654366
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7890295358649789,
"acc_stderr": 0.02655837250266192,
"acc_norm": 0.7890295358649789,
"acc_norm_stderr": 0.02655837250266192
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.036401182719909476,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.036401182719909476
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5535714285714286,
"acc_stderr": 0.04718471485219587,
"acc_norm": 0.5535714285714286,
"acc_norm_stderr": 0.04718471485219587
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281376,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281376
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8186462324393359,
"acc_stderr": 0.01377869377846408,
"acc_norm": 0.8186462324393359,
"acc_norm_stderr": 0.01377869377846408
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.288268156424581,
"acc_stderr": 0.015149132860209432,
"acc_norm": 0.288268156424581,
"acc_norm_stderr": 0.015149132860209432
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818733,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818733
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.025583062489984813,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.025583062489984813
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.024383665531035457,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.024383665531035457
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5035460992907801,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.5035460992907801,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4680573663624511,
"acc_stderr": 0.012744149704869647,
"acc_norm": 0.4680573663624511,
"acc_norm_stderr": 0.012744149704869647
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.028245687391462927,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.028245687391462927
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6633986928104575,
"acc_stderr": 0.019117213911495158,
"acc_norm": 0.6633986928104575,
"acc_norm_stderr": 0.019117213911495158
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7428571428571429,
"acc_stderr": 0.027979823538744546,
"acc_norm": 0.7428571428571429,
"acc_norm_stderr": 0.027979823538744546
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578337,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578337
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.02991312723236804,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.02991312723236804
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3353733170134639,
"mc1_stderr": 0.01652753403966899,
"mc2": 0.48804542326643174,
"mc2_stderr": 0.015087630632446147
},
"harness|winogrande|5": {
"acc": 0.8042620363062352,
"acc_stderr": 0.011151145042218319
},
"harness|gsm8k|5": {
"acc": 0.5860500379075056,
"acc_stderr": 0.013566991960151778
}
}
```
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Locutusque/Hercules-3.1-Mistral-7B
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.8-experiment26-7b
layer_range: [0, 32]
merge_method: slerp
base_model: Locutusque/Hercules-3.1-Mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Isaak-Carter/JOSIE_Beta-3-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
sphynxlee/LunarLander-v2
|
sphynxlee
| 2024-03-24T18:53:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-24T18:31:59Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.23 +/- 23.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the files in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("sphynxlee/LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
DaRkSpyro/DianeFoxingtonTheBadGuys
|
DaRkSpyro
| 2024-03-24T18:40:03Z | 0 | 0 |
flair
|
[
"flair",
"music",
"en",
"dataset:HuggingFaceTB/cosmopedia",
"license:apache-2.0",
"region:us"
] | null | 2024-03-24T17:52:34Z |
---
license: apache-2.0
datasets:
- HuggingFaceTB/cosmopedia
language:
- en
metrics:
- accuracy
library_name: flair
tags:
- music
---
|
iamkaikai/OPTICAL-LORA
|
iamkaikai
| 2024-03-24T18:33:13Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-03-24T18:18:37Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - iamkaikai/OPTICAL-LORA
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the iamkaikai/OPTICAL-ART dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
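Until the snippet above is filled in, here is a minimal sketch assuming the standard `diffusers` LoRA-loading API (the prompt is only an illustration):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("iamkaikai/OPTICAL-LORA")  # LoRA weights from this repo

image = pipe("optical art, high-contrast geometric pattern", num_inference_steps=30).images[0]
image.save("optical-art.png")
```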
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
jsubramani/fusion-force-all-MiniLM-L6-v2
|
jsubramani
| 2024-03-24T18:33:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:adapter:sentence-transformers/all-MiniLM-L6-v2",
"region:us"
] | null | 2024-03-24T18:30:41Z |
---
library_name: peft
base_model: sentence-transformers/all-MiniLM-L6-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0
|
mohammedaly22/Codify-LLama-2-7b
|
mohammedaly22
| 2024-03-24T18:30:07Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-22T05:55:32Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
model-index:
- name: codify-llama-2-7b
results: []
---
# codify-llama-2-7b
This model is a fine-tuned version of [Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the [ALPACA_20k](https://raw.githubusercontent.com/sahil280114/codealpaca/master/data/code_alpaca_20k.json) dataset.
## Intended uses & limitations
1. Load the model as a Hugging Face Pipeline:
```Python
from transformers import pipeline
pipe = pipeline('text-generation', model='mohammedaly22/Codify-LLama-2-7b')
```
2. Prepare the instruction template
```Python
from string import Template
prompt_template_inference = Template("""You are a world class software engineer answering coding questions. Below is an
instruction that describes a coding task, paired with an optional input that
provides further context. Write a response that accurately completes the task if
the instruction is code-related, else, you should reponse that you don't know the answer
as it is outside the context of coding. Note, you should stop generation after reaching the <EOG> token.
### Instruction:
$instruction
### Input:
$input
### Response:
""")
```
3. Create an instruction prompt using the above template
```Python
instruction = "Write a Python function that creates a simple 2-layer neural network using Keras for performing binary classification"
input = "input shape of the neural network will be a vector of 200 elements"
prompt = prompt_template_inference.substitute({"instruction": instruction, "input": input})
```
This is the final instruction prompt that will be passed to the pipeline
```
You are a world class software engineer answering coding questions. Below is an
instruction that describes a coding task, paired with an optional input that
provides further context. Write a response that accurately completes the task if
the instruction is code-related, else, you should reponse that you don't know the answer
as it is outside the context of coding. Note, you should stop generation after reaching the <EOG> token.
### Instruction:
Write a Python function that creates a simple 2-layer neural network using Keras for performing binary classification
### Input:
input shape of the neural network will be a vector of 200 elements
### Response:
```
4. Pass the instruction prompt to the pipeline
```python
output = pipe(
prompt,
do_sample=True,
return_full_text=False,
max_new_tokens=200,
clean_up_tokenization_spaces=True
)
```
Here is the code generated by the model:
```python
def build_simple_neural_network():
return Model(
inputs=Input(shape=(200,)),
outputs=Dense(2, activation="softmax"),
name="simple_neural_network"
)
<EOG>
```
## Training procedure
### BitsAndBytes hyperparameters
- use_4bit: True
- bnb_4bit_compute_dtype: "float16"
- bnb_4bit_quant_type: "nf4"
- use_double_nested_quant: False
### LoRA configurations
- lora_r: 64
- lora_alpha: 16
- lora_dropout: 0.1
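For reference, a sketch of how the quantization and LoRA settings above map onto `BitsAndBytesConfig` and `LoraConfig` (the `bias` and `task_type` values are assumptions, not taken from the original training script):
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # use_4bit: True
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype: "float16"
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type: "nf4"
    bnb_4bit_use_double_quant=False,       # use_double_nested_quant: False
)

lora_config = LoraConfig(
    r=64,              # lora_r
    lora_alpha=16,     # lora_alpha
    lora_dropout=0.1,  # lora_dropout
    bias="none",
    task_type="CAUSAL_LM",
)
```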
### Training hyperparameters
The following hyperparameters were used during training:
- num_train_epochs: 1
- fp16: False
- bf16: False
- per_device_train_batch_size: 4
- per_device_eval_batch_size: 4
- gradient_accumulation_steps: 1
- gradient_checkpointing: True
- max_grad_norm: 0.3
- learning_rate: 2e-4
- weight_decay: 0.001
- optim: "paged_adamw_32bit"
- lr_scheduler_type: "cosine"
- max_steps: -1
- warmup_ratio: 0.03
- group_by_length: True
- save_steps: 0
- logging_steps: 50
### Training results
| Step | Training Loss |
|:-----:|:-------------:|
| 50 | 1.377900 |
| 100 | 0.368700 |
| 150 | 0.336600 |
| 200 | 0.334800 |
| 250 | 0.332300 |
| 300 | 0.333700 |
| 350 | 0.322100 |
| 400 | 0.317000 |
| 450 | 0.320800 |
| 500 | 0.308400 |
| 550 | 0.321900 |
| 600 | 0.310700 |
| 650 | 0.322100 |
| 700 | 0.327700 |
| 750 | 0.322000 |
| 800 | 0.311300 |
| 850 | 0.321800 |
| 900 | 0.318700 |
| 950 | 0.321600 |
| 1000 | 0.314900 |
| 1050 | 0.321700 |
| 1100 | 0.307600 |
| 1150 | 0.315800 |
| 1200 | 0.316800 |
| 1250 | 0.314200 |
| 1300 | 0.310400 |
| 1350 | 0.308000 |
| 1400 | 0.318600 |
| 1450 | 0.309700 |
| 1500 | 0.307600 |
| 1550 | 0.296800 |
| 1600 | 0.305800 |
| 1650 | 0.307400 |
| 1700 | 0.327400 |
| 1750 | 0.306100 |
| 1800 | 0.309900 |
| 1850 | 0.316300 |
| 1900 | 0.299500 |
| 1950 | 0.315700 |
| 2000 | 0.307600 |
|
dima1234321/shared_audio
|
dima1234321
| 2024-03-24T18:25:35Z | 166 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:vitouphy/wav2vec2-xls-r-300m-timit-phoneme",
"base_model:finetune:vitouphy/wav2vec2-xls-r-300m-timit-phoneme",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-15T14:56:42Z |
---
license: apache-2.0
base_model: vitouphy/wav2vec2-xls-r-300m-timit-phoneme
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: shared_audio
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shared_audio
This model is a fine-tuned version of [vitouphy/wav2vec2-xls-r-300m-timit-phoneme](https://huggingface.co/vitouphy/wav2vec2-xls-r-300m-timit-phoneme) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 66.1707
- Wer: 1.2143
- Cer: 9.2148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 58.4649 | 0.03 | 2 | 66.1707 | 1.2143 | 9.2148 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Pardner/whisper-small-fa
|
Pardner
| 2024-03-24T18:21:10Z | 89 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"fa",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-24T01:17:58Z |
---
language:
- fa
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Small Fa - Brett OConnor
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 16.0
type: mozilla-foundation/common_voice_16_0
config: fa
split: None
args: 'config: fa, split: test'
metrics:
- name: Wer
type: wer
value: 36.3317501910689
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Fa - Brett OConnor
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3430
- Wer: 36.3318
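A minimal transcription sketch (not part of the training setup; the audio path is a placeholder and 16 kHz mono input is assumed):
```python
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("Pardner/whisper-small-fa")
model = WhisperForConditionalGeneration.from_pretrained("Pardner/whisper-small-fa")

# Placeholder path to a Persian speech sample, resampled to 16 kHz
audio, _ = librosa.load("sample_fa.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```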
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2622 | 0.41 | 1000 | 0.4714 | 46.2155 |
| 0.2145 | 0.81 | 2000 | 0.4000 | 42.0843 |
| 0.1135 | 1.22 | 3000 | 0.3757 | 38.7570 |
| 0.1198 | 1.63 | 4000 | 0.3489 | 36.7330 |
| 0.0721 | 2.03 | 5000 | 0.3430 | 36.3318 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
12345deena/t5baseofb4
|
12345deena
| 2024-03-24T18:18:19Z | 162 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-24T12:08:21Z |
---
license: apache-2.0
base_model: google-t5/t5-base
tags:
- generated_from_trainer
model-index:
- name: t5baseofb4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5baseofb4
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
davidyinyang/finetuning-sentiment-model-3000-samples
|
davidyinyang
| 2024-03-24T18:09:20Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-29T18:02:49Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3281
- Accuracy: 0.8667
- F1: 0.8701
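Example usage (a minimal sketch, assuming the checkpoint loads with the standard text-classification pipeline; the input sentence is illustrative):
```python
from transformers import pipeline

# Sentiment classification with the fine-tuned DistilBERT checkpoint.
classifier = pipeline("text-classification", model="davidyinyang/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was surprisingly good!"))
# Label names (e.g. LABEL_0/LABEL_1) depend on how the labels were configured during training.
```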
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
pepijn223/ppo-LunarLander-v2-unit-1
|
pepijn223
| 2024-03-24T18:08:01Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-24T18:07:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.91 +/- 14.58
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual course convention; adjust it to this repo's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename "ppo-LunarLander-v2.zip" is assumed; check the repo's file list.
checkpoint = load_from_hub("pepijn223/ppo-LunarLander-v2-unit-1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
cherryboi/gemma-Dutch-20steps-v0.1
|
cherryboi
| 2024-03-24T18:04:29Z | 139 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T17:59:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gonzalezrostani/my_awesome_wnut_all_JAOo
|
gonzalezrostani
| 2024-03-24T18:02:56Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-24T17:39:18Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_all_JAOo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_all_JAOo
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0448
- Precision: 0.4554
- Recall: 0.4182
- F1: 0.4360
- Accuracy: 0.9897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 251 | 0.0374 | 0.5556 | 0.2273 | 0.3226 | 0.9895 |
| 0.051 | 2.0 | 502 | 0.0357 | 0.4872 | 0.3455 | 0.4043 | 0.9898 |
| 0.051 | 3.0 | 753 | 0.0432 | 0.3504 | 0.4364 | 0.3887 | 0.9871 |
| 0.0178 | 4.0 | 1004 | 0.0448 | 0.4554 | 0.4182 | 0.4360 | 0.9897 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
0x9/matrix-large-0.7B-v2
|
0x9
| 2024-03-24T17:59:54Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T12:42:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
balakhonoff/solidity_security_model
|
balakhonoff
| 2024-03-24T17:48:43Z | 2 | 1 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-03-24T17:48:00Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
raminass/M7
|
raminass
| 2024-03-24T17:48:15Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:raminass/scotus-v10",
"base_model:finetune:raminass/scotus-v10",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-24T17:01:31Z |
---
license: cc-by-sa-4.0
base_model: raminass/scotus-v10
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: M7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M7
This model is a fine-tuned version of [raminass/scotus-v10](https://huggingface.co/raminass/scotus-v10) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9286
- Accuracy: 0.7485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7812 | 1.0 | 2170 | 0.9959 | 0.7109 |
| 0.4283 | 2.0 | 4340 | 0.9299 | 0.7408 |
| 0.2607 | 3.0 | 6510 | 0.9286 | 0.7485 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
joen2010/ppo-LunarLander-v2
|
joen2010
| 2024-03-24T17:43:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-24T17:39:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.92 +/- 17.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to this repo's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename "ppo-LunarLander-v2.zip" is assumed; check the repo's file list.
checkpoint = load_from_hub("joen2010/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
OfficialAC18/LLaMA-2-7B-guanaco
|
OfficialAC18
| 2024-03-24T17:41:52Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"llama",
"general",
"llm",
"text-generation",
"en",
"dataset:mlabonne/guanaco-llama2-1k",
"license:mit",
"region:us"
] |
text-generation
| 2024-03-24T16:57:39Z |
---
license: mit
datasets:
- mlabonne/guanaco-llama2-1k
language:
- en
library_name: adapter-transformers
pipeline_tag: text-generation
tags:
- general
- llm
---
|
zeon8985army/KhalsuuTagalog-fineTunedLargeV2-tl-1
|
zeon8985army
| 2024-03-24T17:41:43Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"id-asr-leaderboard",
"generated_from_trainer",
"id",
"dataset:-",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-24T17:41:33Z |
---
language:
- id
license: apache-2.0
library_name: peft
tags:
- id-asr-leaderboard
- generated_from_trainer
datasets:
- '-'
base_model: openai/whisper-large-v2
model-index:
- name: zeon8985army/KhalsuuTagalog-fineTunedLargeV2-tl-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zeon8985army/KhalsuuTagalog-fineTunedLargeV2-tl-1
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the AlistairBegg dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2708
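Example usage (a minimal sketch for attaching the adapter to its base model; it assumes a standard PEFT adapter layout in this repo):
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base Whisper checkpoint, then attach this repo's PEFT adapter on top of it.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = PeftModel.from_pretrained(base, "zeon8985army/KhalsuuTagalog-fineTunedLargeV2-tl-1")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
```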
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 12
- training_steps: 276
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.26 | 0.05 | 12 | 3.5752 |
| 2.9122 | 0.1 | 24 | 2.9222 |
| 2.1061 | 0.15 | 36 | 1.9585 |
| 1.6265 | 0.2 | 48 | 1.5771 |
| 1.3155 | 0.25 | 60 | 1.3177 |
| 1.1793 | 0.3 | 72 | 1.1780 |
| 1.0208 | 0.35 | 84 | 1.0613 |
| 0.9111 | 0.4 | 96 | 0.9361 |
| 0.8078 | 0.44 | 108 | 0.7634 |
| 0.6103 | 0.49 | 120 | 0.4794 |
| 0.3911 | 0.54 | 132 | 0.2857 |
| 0.324 | 0.59 | 144 | 0.2915 |
| 0.3228 | 0.64 | 156 | 0.2858 |
| 0.3103 | 0.69 | 168 | 0.2778 |
| 0.3458 | 0.74 | 180 | 0.2750 |
| 0.3283 | 0.79 | 192 | 0.2738 |
| 0.394 | 0.84 | 204 | 0.2728 |
| 0.3181 | 0.89 | 216 | 0.2711 |
| 0.3201 | 0.94 | 228 | 0.2704 |
| 0.3291 | 0.99 | 240 | 0.2705 |
| 0.3084 | 1.04 | 252 | 0.2709 |
| 0.32 | 1.09 | 264 | 0.2711 |
| 0.363 | 1.14 | 276 | 0.2708 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
gonzalezrostani/my_awesome_wnut_all_JAOa
|
gonzalezrostani
| 2024-03-24T17:39:09Z | 110 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-24T17:02:03Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_all_JAOa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_all_JAOa
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0970
- Precision: 0.4829
- Recall: 0.4652
- F1: 0.4739
- Accuracy: 0.9748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 251 | 0.0810 | 0.4868 | 0.3370 | 0.3983 | 0.9748 |
| 0.0831 | 2.0 | 502 | 0.0850 | 0.5333 | 0.3810 | 0.4444 | 0.9759 |
| 0.0831 | 3.0 | 753 | 0.0894 | 0.4906 | 0.4762 | 0.4833 | 0.9750 |
| 0.0431 | 4.0 | 1004 | 0.0970 | 0.4829 | 0.4652 | 0.4739 | 0.9748 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
dmayboroda/daw-llm-mistral-midi
|
dmayboroda
| 2024-03-24T17:37:15Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T17:22:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
duytu/scibert_ner_drugname
|
duytu
| 2024-03-24T17:35:42Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:allenai/scibert_scivocab_cased",
"base_model:finetune:allenai/scibert_scivocab_cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-25T01:44:11Z |
---
base_model: allenai/scibert_scivocab_cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: scibert_ner_drugname
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_ner_drugname
This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1243
- Precision: 0.7631
- Recall: 0.8520
- F1: 0.8051
- Accuracy: 0.9722
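Example usage (a minimal sketch, assuming the checkpoint loads with the standard token-classification pipeline; the input sentence is illustrative):
```python
from transformers import pipeline

# Drug-name NER with the fine-tuned SciBERT checkpoint.
ner = pipeline(
    "token-classification",
    model="duytu/scibert_ner_drugname",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("The patient was given 500 mg of amoxicillin twice daily."))
```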
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0733 | 1.0 | 120 | 0.1176 | 0.6466 | 0.7713 | 0.7035 | 0.9583 |
| 0.0069 | 2.0 | 240 | 0.1126 | 0.6757 | 0.7848 | 0.7261 | 0.9654 |
| 0.0521 | 3.0 | 360 | 0.0949 | 0.7461 | 0.8565 | 0.7975 | 0.9707 |
| 0.0217 | 4.0 | 480 | 0.0972 | 0.7171 | 0.8296 | 0.7692 | 0.9718 |
| 0.001 | 5.0 | 600 | 0.1111 | 0.7422 | 0.8520 | 0.7933 | 0.9707 |
| 0.0044 | 6.0 | 720 | 0.1138 | 0.7664 | 0.8386 | 0.8009 | 0.9715 |
| 0.0011 | 7.0 | 840 | 0.1155 | 0.7449 | 0.8251 | 0.7830 | 0.9699 |
| 0.0006 | 8.0 | 960 | 0.1213 | 0.7344 | 0.8430 | 0.7850 | 0.9716 |
| 0.0289 | 9.0 | 1080 | 0.1238 | 0.7661 | 0.8520 | 0.8068 | 0.9718 |
| 0.0096 | 10.0 | 1200 | 0.1243 | 0.7631 | 0.8520 | 0.8051 | 0.9722 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
nbeerbower/bruphin-iota
|
nbeerbower
| 2024-03-24T17:33:16Z | 49 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:nbeerbower/bruphin-theta",
"base_model:merge:nbeerbower/bruphin-theta",
"base_model:pabloce/Dolphin-2.8-slerp",
"base_model:merge:pabloce/Dolphin-2.8-slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T17:26:15Z |
---
license: apache-2.0
base_model:
- nbeerbower/bruphin-theta
- pabloce/Dolphin-2.8-slerp
library_name: transformers
tags:
- mergekit
- merge
---
# bruphin-iota
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/bruphin-theta](https://huggingface.co/nbeerbower/bruphin-theta)
* [pabloce/Dolphin-2.8-slerp](https://huggingface.co/pabloce/Dolphin-2.8-slerp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/bruphin-theta
layer_range: [0, 32]
- model: pabloce/Dolphin-2.8-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: pabloce/Dolphin-2.8-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
JeffreyJIANG/bert-imdb
|
JeffreyJIANG
| 2024-03-24T17:32:41Z | 113 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-24T15:40:33Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93956
- name: F1
type: f1
value: 0.9395537111681099
- name: Precision
type: precision
value: 0.939743003448315
- name: Recall
type: recall
value: 0.93956
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2266
- Accuracy: 0.9396
- F1: 0.9396
- Precision: 0.9397
- Recall: 0.9396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 9072
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
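A sketch of the equivalent `TrainingArguments` (the `output_dir` is an assumption; unlisted options keep their library defaults, which already match the Adam betas/epsilon above):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-imdb",          # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    seed=9072,
    lr_scheduler_type="linear",      # Adam betas/epsilon are the library defaults
)
```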
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2223 | 1.0 | 1563 | 0.1898 | 0.9328 | 0.9327 | 0.9331 | 0.9328 |
| 0.1161 | 2.0 | 3126 | 0.2266 | 0.9396 | 0.9396 | 0.9397 | 0.9396 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
JFernandoGRE/mistral_7b_augmenteddemocracy_dups_all1_25
|
JFernandoGRE
| 2024-03-24T17:32:21Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-24T17:28:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ricigl/bert-base-portuguese-cased-finetuned-RM-3
|
ricigl
| 2024-03-24T17:26:41Z | 162 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-03-24T17:26:16Z |
---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-portuguese-cased-finetuned-RM-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-portuguese-cased-finetuned-RM-3
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0502
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 20 | 4.1937 |
| No log | 2.0 | 40 | 3.8830 |
| No log | 3.0 | 60 | 3.9086 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
probabl-ai/ScikitLLM-Model
|
probabl-ai
| 2024-03-24T17:07:11Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T09:58:15Z |
***ScikitLLM*** is an LLM fine-tuned to write references and code for the Scikit-Learn documentation.
Features of ScikitLLM include:
* Support for RAG (three chunks)
* Sources and quotations using a modified version of the wiki syntax (`<ref>`)
* Code samples and examples based on the code quoted in the chunks.
* Expanded knowledge of and familiarity with Scikit-Learn concepts and documentation.
## Training
ScikitLLM is based on Mistral-OpenHermes 7B, a pre-existing fine-tune of Mistral 7B. OpenHermes already includes many of the capabilities desired for the end use, including instruction tuning, source analysis, and native support for the ChatML syntax.
As a fine-tune of a fine-tune, ScikitLLM has been trained with a lower learning rate than is commonly used in fine-tuning projects.
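Example usage (a minimal generation sketch; it assumes the tokenizer ships a ChatML-compatible chat template, which the card implies but does not state, and the question is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "probabl-ai/ScikitLLM-Model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# For RAG, the retrieved chunks would be prepended to the user message by the caller.
messages = [{"role": "user", "content": "How do I fit a RandomForestClassifier in scikit-learn?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```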
|
daniel0321forever/personalize-whiper-model
|
daniel0321forever
| 2024-03-24T17:02:32Z | 3 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:mozilla-foundation/fsc-audio-dataset",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-12T09:15:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: openai/whisper-small
datasets:
- mozilla-foundation/fsc-audio-dataset
metrics:
- wer
model-index:
- name: Personalized Whisper Small - Wei Fang
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: fsc-audio-dataset
type: mozilla-foundation/fsc-audio-dataset
metrics:
- type: wer
value: 8.372290692732681
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Personalized Whisper Small - Wei Fang
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the fsc-audio-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2946
- Wer: 8.3723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.9814 | 0.32 | 100 | 0.8164 | 13.2172 |
| 0.3013 | 0.64 | 200 | 0.2578 | 11.7722 |
| 0.2074 | 0.96 | 300 | 0.2192 | 10.4972 |
| 0.1429 | 1.28 | 400 | 0.2245 | 11.0072 |
| 0.1565 | 1.6 | 500 | 0.2102 | 10.6247 |
| 0.1554 | 1.92 | 600 | 0.2137 | 11.2197 |
| 0.0684 | 2.24 | 700 | 0.2139 | 8.8823 |
| 0.0717 | 2.56 | 800 | 0.2142 | 9.6898 |
| 0.0795 | 2.88 | 900 | 0.2128 | 9.2223 |
| 0.0329 | 3.21 | 1000 | 0.2341 | 9.3073 |
| 0.03 | 3.53 | 1100 | 0.2324 | 8.9673 |
| 0.0319 | 3.85 | 1200 | 0.2365 | 9.0948 |
| 0.0137 | 4.17 | 1300 | 0.2403 | 9.0523 |
| 0.0145 | 4.49 | 1400 | 0.2470 | 8.3723 |
| 0.0145 | 4.81 | 1500 | 0.2596 | 9.4348 |
| 0.0067 | 5.13 | 1600 | 0.2544 | 8.9248 |
| 0.0088 | 5.45 | 1700 | 0.2553 | 8.4573 |
| 0.0065 | 5.77 | 1800 | 0.2729 | 8.8823 |
| 0.0018 | 6.09 | 1900 | 0.2680 | 8.7973 |
| 0.0023 | 6.41 | 2000 | 0.2710 | 9.0948 |
| 0.0018 | 6.73 | 2100 | 0.2762 | 8.8398 |
| 0.002 | 7.05 | 2200 | 0.2717 | 8.5848 |
| 0.0011 | 7.37 | 2300 | 0.2784 | 8.5423 |
| 0.0012 | 7.69 | 2400 | 0.2797 | 8.4573 |
| 0.0011 | 8.01 | 2500 | 0.2782 | 8.3723 |
| 0.0007 | 8.33 | 2600 | 0.2838 | 8.1598 |
| 0.0007 | 8.65 | 2700 | 0.2826 | 8.2448 |
| 0.0013 | 8.97 | 2800 | 0.2835 | 8.4148 |
| 0.0006 | 9.29 | 2900 | 0.2913 | 8.2448 |
| 0.0006 | 9.62 | 3000 | 0.2906 | 8.4148 |
| 0.001 | 9.94 | 3100 | 0.2886 | 8.6273 |
| 0.0005 | 10.26 | 3200 | 0.2890 | 8.3723 |
| 0.0005 | 10.58 | 3300 | 0.2905 | 8.3723 |
| 0.0005 | 10.9 | 3400 | 0.2917 | 8.4573 |
| 0.0008 | 11.22 | 3500 | 0.2927 | 8.3723 |
| 0.0019 | 11.54 | 3600 | 0.2932 | 8.3723 |
| 0.0004 | 11.86 | 3700 | 0.2939 | 8.3723 |
| 0.0004 | 12.18 | 3800 | 0.2941 | 8.3723 |
| 0.0005 | 12.5 | 3900 | 0.2944 | 8.3723 |
| 0.0005 | 12.82 | 4000 | 0.2946 | 8.3723 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ChavyvAkvar/tutorial-example
|
ChavyvAkvar
| 2024-03-24T16:57:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-23T17:30:55Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mmnga/napopoa32-swallow-hermes-st-v1-gguf
|
mmnga
| 2024-03-24T16:56:40Z | 138 | 0 | null |
[
"gguf",
"mistral",
"en",
"ja",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-24T15:37:24Z |
---
license: apache-2.0
language:
- en
- ja
tags:
- mistral
---
# napopoa32-swallow-hermes-st-v1-gguf
A GGUF-format conversion of [swallow-hermes-st-v1 published by napopoa32](https://huggingface.co/napopoa32/swallow-hermes-st-v1).
This is the base model.
## Usage
```
# Build llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j

# Run the quantized model with a ChatML-formatted prompt
# (the Japanese user prompt asks for an interesting coming-of-age story)
./main -m 'napopoa32-swallow-hermes-st-v1-q4_0.gguf' -p "<|im_start|>system\nYou are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>\n<|im_start|>user\n面白い青春の物語を書いてください。<|im_end|>\n<|im_start|>assistant" -n 128
```
|
bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF
|
bartowski
| 2024-03-24T16:54:33Z | 88 | 2 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"en",
"dataset:Locutusque/hyperion-dpo-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-24T16:40:55Z |
---
library_name: transformers
license: apache-2.0
language:
- en
widget:
- example_title: Reasoning
text: >
<|im_start|>system
You are Hyperion, an AI assistant developed by Sebastian Gabarain with a
focus in reasoning, coding, science, and other general-purpose subjects. You
are an AI assistant that follows instruction extremely well. Help as much as
you can.
<|im_end|>
<|im_start|>user
In this task, you are given a description of an object and its surroundings.
Your
task is to describe the motion of the object mentioned in the description
using
differential equations. The answer should be a mathematical expression in t.
Description:
A ball is thrown up at 30 ft/sec from 3 feet off the ground. Solve this
step-by-step,
explaining each step in detail.
<|im_end|>
<|im_start|>assistant
output:
text: >
Hyperion here. The task requires you to describe the motion of a ball
thrown up using differential equations. Here's the detailed explanation of
how to solve it:
1. Identify the given information:
- Initial velocity (v0): 30 ft/sec
- Initial height (y0): 3 feet off the ground
2. Determine the acceleration due to gravity (g) at sea level, which is
approximately -32.2 ft/s^2. Since we're dealing with a short distance and
not very high altitudes, we can consider this value of g as constant.
3. Use Newton's second law of motion to find the differential equation
that describes the motion:
- The net force acting on the ball is the sum of gravity (mg) and air resistance (which we will neglect for simplicity). So, Fnet = mg.
- Since acceleration equals force divided by mass, a = Fnet/m. In this case, m cancels out, so a = g.
- The motion equation becomes: dv/dt = g = -32.2 ft/s^2 (since we're moving upwards). This is the first differential equation.
4. Since we want to describe the ball's position (y) as well, we need
another differential equation. We can use kinematics equations:
- The velocity equation is v = v0 + at. Substitute the value of a from step 3:
dv/dt = -32.2 ft/s^2. Rewriting it as an integral:
∫dv = -32.2 ft/s^2 * dt
- Integrating with respect to time, we get v(t) = -16.1 t + C (constant of integration).
5. Now we have two equations, one for velocity and another for position:
- dv/dt = -32.2 ft/s^2 (equation 1)
- v(t) = -16.1 t + C (equation 2)
6. We need to find the constant C using the initial conditions given in
the problem. We know that initially, at time t=0, the ball's velocity is
v0:
- Equation 2 becomes v0 = -16.1 * 0 + C, which simplifies to C = v0 = 30 ft/sec (initial velocity).
7. Now we have both equations with the constants filled in:
- dv/dt = -32.2 ft/s^2 (equation 1)
- v(t) = -16.1 t + 30 ft/sec (equation 2)
8. Finally, we need to find the ball's position (y) as a function of time:
- Use the second kinematics equation y = y0 + v0*t + ½*a*t^2 (since acceleration is constant, it can be moved outside the integral).
- Substitute the values for v0 and a:
y(t) = 3 ft + (30 ft/sec) * t + (-½ * 32.2 ft/s^2) * t^2
That's it! The differential equation describing the motion of the ball is:
- dv/dt = -32.2 ft/s^2 (equation 1)
- v(t) = -16.1 t + 30 ft/sec (equation 2)
- y(t) = 3 ft + (30 ft/sec) * t - (16.1 t^2) ft (equation 3)
datasets:
- Locutusque/hyperion-dpo-v1.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp Quantizations of Hyperion-3.0-Mistral-7B-DPO
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization.
Original model: https://huggingface.co/Locutusque/Hyperion-3.0-Mistral-7B-DPO
Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Hyperion-3.0-Mistral-7B-DPO-Q8_0.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| [Hyperion-3.0-Mistral-7B-DPO-Q6_K.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [Hyperion-3.0-Mistral-7B-DPO-Q5_K_M.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
| [Hyperion-3.0-Mistral-7B-DPO-Q5_K_S.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
| [Hyperion-3.0-Mistral-7B-DPO-Q5_0.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
| [Hyperion-3.0-Mistral-7B-DPO-Q4_K_M.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
| [Hyperion-3.0-Mistral-7B-DPO-Q4_K_S.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
| [Hyperion-3.0-Mistral-7B-DPO-IQ4_NL.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-IQ4_NL.gguf) | IQ4_NL | 4.15GB | Good quality, similar to Q4_K_S; uses a newer quantization method. |
| [Hyperion-3.0-Mistral-7B-DPO-IQ4_XS.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-IQ4_XS.gguf) | IQ4_XS | 3.94GB | Decent quality, new method with similar performance to Q4. |
| [Hyperion-3.0-Mistral-7B-DPO-Q4_0.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
| [Hyperion-3.0-Mistral-7B-DPO-IQ3_M.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance. |
| [Hyperion-3.0-Mistral-7B-DPO-IQ3_S.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| [Hyperion-3.0-Mistral-7B-DPO-Q3_K_L.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| [Hyperion-3.0-Mistral-7B-DPO-Q3_K_M.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
| [Hyperion-3.0-Mistral-7B-DPO-Q3_K_S.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [Hyperion-3.0-Mistral-7B-DPO-Q2_K.gguf](https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Hyperion-3.0-Mistral-7B-DPO-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
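As a hedged sketch (assuming the `huggingface_hub` Python package is installed), a single quant file can be fetched programmatically instead of cloning the whole branch, then passed to llama.cpp via `./main -m`:
```
from huggingface_hub import hf_hub_download

# Download just one file from the repo, e.g. the Q4_K_M quant listed above
gguf_path = hf_hub_download(
    repo_id="bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF",
    filename="Hyperion-3.0-Mistral-7B-DPO-Q4_K_M.gguf",
)
print(gguf_path)  # local path to pass to llama.cpp's ./main -m
```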
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
dchatca/vistral_final_v4.3
|
dchatca
| 2024-03-24T16:51:37Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-24T16:41:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
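Until that snippet is provided, a minimal sketch, assuming the checkpoint loads as a causal language model via 🤗 transformers (the entry's tags suggest a 4-bit bitsandbytes Mistral variant), might look like this; the prompt is purely illustrative.
```
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "dchatca/vistral_final_v4.3"  # taken from this entry's metadata

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",  # assumes the accelerate package is available
)

prompt = "Xin chào!"  # illustrative; the card does not document a chat template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```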
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|