Dataset columns (Hugging Face dataset viewer schema):
- modelId: string (5 to 139 chars)
- author: string (2 to 42 chars)
- last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-09-06 06:27:01)
- downloads: int64 (0 to 223M)
- likes: int64 (0 to 11.7k)
- library_name: string (542 distinct values)
- tags: list (1 to 4.05k items)
- pipeline_tag: string (55 distinct values)
- createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-09-06 06:26:44)
- card: string (11 to 1.01M chars)

Each record below is shown as a metadata line, a tags line, and then the full model card.
kitty528/Lyric-generator | author: kitty528 | last_modified: 2024-03-01T10:22:20Z | downloads: 175 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2024-02-01T22:40:46Z
tags: ["transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
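Until the authors fill this in, a minimal sketch using the standard 🤗 transformers text-generation pipeline (the repo's `gpt2` tag suggests this path; the prompt and sampling settings are illustrative, not the authors'):

```python
from transformers import pipeline

# Standard text-generation pipeline; settings below are illustrative.
generator = pipeline("text-generation", model="kitty528/Lyric-generator")
print(generator("Verse 1:", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```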
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
Sajeepan/Gath_mistral_7b | author: Sajeepan | last_modified: 2024-03-01T10:21:42Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2024-03-01T10:07:36Z
tags: ["transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
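Until the authors provide one, a minimal sketch with the generic `AutoModelForCausalLM` API (the dtype and device settings are illustrative assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Sajeepan/Gath_mistral_7b")
model = AutoModelForCausalLM.from_pretrained(
    "Sajeepan/Gath_mistral_7b",
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate
)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```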
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
luaqi/sn9_17 | author: luaqi | last_modified: 2024-03-01T10:18:37Z | downloads: 173 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2024-03-01T10:17:20Z
tags: ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
SimplCup/BushyV2 | author: SimplCup | last_modified: 2024-03-01T10:18:23Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-03-01T10:18:07Z
tags: ["license:cc-by-nc-nd-4.0", "region:us"]
---
license: cc-by-nc-nd-4.0
---
AbelKidane/QwenChat4AWQ_allD | author: AbelKidane | last_modified: 2024-03-01T10:12:49Z | downloads: 104 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2024-03-01T08:48:17Z
tags: ["transformers", "safetensors", "qwen", "text-generation", "custom_code", "autotrain_compatible", "4-bit", "awq", "region:us"]
# Code Link
https://colab.research.google.com/drive/1bzf2RPML-YVVGrJhBdr3MY1DY6GTphJg?usp=sharing
MesozoicMetallurgist/nous-Aalenian | author: MesozoicMetallurgist | last_modified: 2024-03-01T10:11:58Z | downloads: 119 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2024-03-01T10:02:24Z
tags: ["transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
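Given the repo's `gemma` and `conversational` tags, a minimal chat-style sketch (assumes the tokenizer ships a chat template; the prompt and settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MesozoicMetallurgist/nous-Aalenian")
model = AutoModelForCausalLM.from_pretrained("MesozoicMetallurgist/nous-Aalenian", device_map="auto")

messages = [{"role": "user", "content": "Write a haiku about ammonites."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```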
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
freshpearYoon/v3_free_all_re | author: freshpearYoon | last_modified: 2024-03-01T10:11:55Z | downloads: 77 | likes: 0 | library_name: transformers | pipeline_tag: automatic-speech-recognition | createdAt: 2024-03-01T01:52:34Z
tags: ["transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ko", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us"]
---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
base_model: openai/whisper-large-v3
metrics:
- wer
model-index:
- name: whisper_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_finetune
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the aihub_100000 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4933
- Cer: 6.9924
- Wer: 28.6257
## Model description
More information needed
## Intended uses & limitations
More information needed
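Although the card leaves this section empty, inference should follow the standard Whisper path; a minimal sketch (the audio filename is a placeholder):

```python
from transformers import pipeline

# Standard ASR pipeline for a Whisper fine-tune; "sample.wav" is a placeholder.
asr = pipeline("automatic-speech-recognition", model="freshpearYoon/v3_free_all_re")
print(asr("sample.wav")["text"])
```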
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after this list):
- learning_rate: 1e-08
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
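As a rough illustration, the settings above map onto `transformers.Seq2SeqTrainingArguments` approximately as follows (a sketch; `output_dir` is a placeholder, and the Adam betas/epsilon listed are the library defaults):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper_finetune",  # placeholder
    learning_rate=1e-8,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=2000,
    fp16=True,  # "Native AMP" mixed precision
)
```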
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6435 | 0.14 | 1000 | 0.6061 | 7.0810 | 29.1317 |
| 0.515 | 0.28 | 2000 | 0.4933 | 6.9924 | 28.6257 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.17.1
- Tokenizers 0.15.2
huggingface-hub-ci/test-model-on-the-fly-4e7276df-4364-4166-abbe-10523a05bc96 | author: huggingface-hub-ci | last_modified: 2024-03-01T10:08:10Z | downloads: 163 | likes: 0 | library_name: transformers | pipeline_tag: feature-extraction | createdAt: 2024-03-01T10:08:07Z
tags: ["transformers", "pytorch", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us"]
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
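This is a CI test model, but the standard feature-extraction path would be (a minimal sketch):

```python
from transformers import pipeline

extractor = pipeline(
    "feature-extraction",
    model="huggingface-hub-ci/test-model-on-the-fly-4e7276df-4364-4166-abbe-10523a05bc96",
)
features = extractor("Hello world")  # one embedding vector per token
print(len(features[0]))
```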
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
aaronzhao/tiny-llama | author: aaronzhao | last_modified: 2024-03-01T10:05:49Z | downloads: 117 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2024-03-01T10:05:28Z
tags: ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
justdimaa/ppo-LunarLander-v2 | author: justdimaa | last_modified: 2024-03-01T10:03:29Z | downloads: 0 | likes: 0 | library_name: stable-baselines3 | pipeline_tag: reinforcement-learning | createdAt: 2024-03-01T10:03:06Z
tags: ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"]
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.48 +/- 14.48
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's Files & versions tab for the actual name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename below is assumed; verify it in the repository.
checkpoint = load_from_hub("justdimaa/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
SidXXD/aiti_db-real_dog | author: SidXXD | last_modified: 2024-03-01T09:57:48Z | downloads: 29 | likes: 0 | library_name: diffusers | pipeline_tag: text-to-image | createdAt: 2024-03-01T09:50:22Z
tags: ["diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:finetune:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"]
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - SidXXD/aiti_db-real_dog
This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on photos with the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth training for the text encoder was not enabled.
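A minimal inference sketch with diffusers (the instance prompt comes from the card's front matter; the dtype and device settings are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("SidXXD/aiti_db-real_dog", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks dog").images[0]  # instance prompt from the front matter
image.save("sks_dog.png")
```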
SidXXD/aiti_db-real_person | author: SidXXD | last_modified: 2024-03-01T09:53:17Z | downloads: 30 | likes: 0 | library_name: diffusers | pipeline_tag: text-to-image | createdAt: 2024-03-01T09:45:04Z
tags: ["diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:finetune:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"]
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: a photo of sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - SidXXD/aiti_db-real_person
This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on photos with the instance prompt "a photo of sks person" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth training for the text encoder was not enabled.
Tippawan/medical_translation_v.1 | author: Tippawan | last_modified: 2024-03-01T09:52:32Z | downloads: 106 | likes: 0 | library_name: transformers | pipeline_tag: text2text-generation | createdAt: 2024-03-01T09:29:28Z
tags: ["transformers", "tensorboard", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "base_model:facebook/nllb-200-distilled-600M", "base_model:finetune:facebook/nllb-200-distilled-600M", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
---
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: fb-nllb-2d6m-Bilibili_MIX_1350K-en-th-70k-clean-punct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fb-nllb-2d6m-Bilibili_MIX_1350K-en-th-70k-clean-punct
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2449
- Bleu: 41.9443
- Gen Len: 31.5709
## Model description
More information needed
## Intended uses & limitations
More information needed
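Usage is not documented, but as an NLLB-200 fine-tune the model should follow the standard translation pipeline; a minimal sketch (the eng_Latn to tha_Thai direction is assumed from the en-th model name):

```python
from transformers import pipeline

# NLLB uses FLORES-200 language codes; the direction here is an assumption.
translator = pipeline(
    "translation",
    model="Tippawan/medical_translation_v.1",
    src_lang="eng_Latn",
    tgt_lang="tha_Thai",
)
print(translator("The patient reports chest pain.")[0]["translation_text"])
```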
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.129 | 1.0 | 144 | 1.2976 | 40.8925 | 31.6055 |
| 1.1063 | 2.0 | 288 | 1.2713 | 40.2012 | 31.346 |
| 0.9238 | 3.0 | 432 | 1.2449 | 41.9443 | 31.5709 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
Lewdiculous/Prima-LelantaclesV6-7b-GGUF-IQ-Imatrix | author: Lewdiculous | last_modified: 2024-03-01T09:50:02Z | downloads: 41 | likes: 5 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2024-03-01T08:14:54Z
tags: ["transformers", "gguf", "mistral", "quantized", "text-generation-inference", "rp", "roleplay", "uncensored", "text-generation", "en", "arxiv:2311.03099", "arxiv:2306.01708", "region:us"]
---
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- rp
- roleplay
- uncensored
pipeline_tag: text-generation
inference: false
language:
- en
---
# **GGUF-Imatrix quantizations for [Test157t/Prima-LelantaclesV6-7b](https://huggingface.co/Test157t/Prima-LelantaclesV6-7b/).**
# What does "Imatrix" mean?
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance.
One of the benefits of using an Imatrix is that it can lead to better model performance, especially when the calibration data is diverse.
More information: [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
For --imatrix data, `imatrix-Prima-LelantaclesV6-7b-F16.dat` was used.
`Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)`
Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2294](https://github.com/ggerganov/llama.cpp/releases/tag/b2294).
The new **IQ3_S** quant option has been shown to perform better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.59.1` or higher.
*If you want any specific quantization to be added, feel free to ask.*
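For reference, a minimal loading sketch with `llama-cpp-python` (the quant filename is an assumption; use whichever file you downloaded, and note that IQ quants need a recent build):

```python
from llama_cpp import Llama

# Assumed filename; substitute the quant you downloaded from this repo.
llm = Llama(model_path="Prima-LelantaclesV6-7b-IQ3_S-imat.gguf", n_ctx=4096)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```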
All credits belong to the [creator](https://huggingface.co/Test157t/).
# Original model information:

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method.
### Models Merged
The following models were included in the merge:
* [Test157t/West-Pasta-Lake-7b](https://huggingface.co/Test157t/West-Pasta-Lake-7b) + [Test157t/Lelantacles6-Experiment26-7B](https://huggingface.co/Test157t/Lelantacles6-Experiment26-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: Test157t/Lelantacles6-Experiment26-7B
parameters:
normalize: true
models:
- model: Test157t/West-Pasta-Lake-7b
parameters:
weight: 1
- model: Test157t/Lelantacles6-Experiment26-7B
parameters:
weight: 1
dtype: float16
```
llmware/bling-stablelm-3b-gguf | author: llmware | last_modified: 2024-03-01T09:46:47Z | downloads: 26 | likes: 6 | library_name: transformers | pipeline_tag: null | createdAt: 2024-02-26T09:25:09Z
tags: ["transformers", "gguf", "stablelm_epoch", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us"]
---
license: cc-by-sa-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
**bling-stablelm-3b-gguf** is a quantized version of BLING Stable-LM 3B, with 4_K_M GGUF quantization, providing a fast, small inference implementation for use on CPUs.
[**bling-stablelm-3b**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0) is a fact-based question-answering model, optimized for complex business documents.
To pull the model via API:

```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/bling-stablelm-3b-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```

Load in your favorite GGUF inference engine, or try with llmware as follows:

```python
from llmware.models import ModelCatalog

# `query` and `text_sample` are your own question and source passage.
model = ModelCatalog().load_model("llmware/bling-stablelm-3b-gguf")
response = model.inference(query, add_context=text_sample, add_prompt_engineering="default_with_context")
```
Note: please review [**config.json**](https://huggingface.co/llmware/bling-stablelm-3b-gguf/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** GGUF
- **Language(s) (NLP):** English
- **License:** CC-BY-SA-4.0
- **Quantized from model:** [llmware/bling-stablelm-3b](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0/)
## Model Card Contact
Darren Oberst & llmware team
PaulCafe/ddpm-celebahq-finetuned-butterflies-2epochs | author: PaulCafe | last_modified: 2024-03-01T09:45:57Z | downloads: 46 | likes: 0 | library_name: diffusers | pipeline_tag: unconditional-image-generation | createdAt: 2024-03-01T09:45:40Z
tags: ["diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us"]
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('PaulCafe/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
lysandre/bert-test | author: lysandre | last_modified: 2024-03-01T09:39:30Z | downloads: 178 | likes: 0 | library_name: transformers | pipeline_tag: feature-extraction | createdAt: 2024-03-01T08:53:23Z
tags: ["transformers", "pytorch", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us"]
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
chu-chu/Reinforce-Cartpole-v1 | author: chu-chu | last_modified: 2024-03-01T09:35:37Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: reinforcement-learning | createdAt: 2024-03-01T09:35:28Z
tags: ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"]
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
ThomasEgense/andreas_model15_again2 | author: ThomasEgense | last_modified: 2024-03-01T09:30:34Z | downloads: 3 | likes: 0 | library_name: diffusers | pipeline_tag: text-to-image | createdAt: 2024-03-01T07:53:36Z
tags: ["diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"]
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of andreasegense person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - ThomasEgense/andreas_model15_again2
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on photos with the instance prompt "a photo of andreasegense person" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth training for the text encoder was not enabled.
ExusBurn/Huggy | author: ExusBurn | last_modified: 2024-03-01T09:24:53Z | downloads: 1 | likes: 1 | library_name: ml-agents | pipeline_tag: reinforcement-learning | createdAt: 2024-03-01T09:21:45Z
tags: ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"]
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ExusBurn/Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
MoritzJost/ppo-Huggy | author: MoritzJost | last_modified: 2024-03-01T09:21:47Z | downloads: 9 | likes: 0 | library_name: ml-agents | pipeline_tag: reinforcement-learning | createdAt: 2024-03-01T09:21:41Z
tags: ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"]
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: MoritzJost/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
StaAhmed/Values_QA | author: StaAhmed | last_modified: 2024-03-01T09:19:13Z | downloads: 106 | likes: 0 | library_name: transformers | pipeline_tag: question-answering | createdAt: 2024-02-29T10:14:06Z
tags: ["transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:StaAhmed/my_awesome_qa_model", "base_model:finetune:StaAhmed/my_awesome_qa_model", "license:apache-2.0", "endpoints_compatible", "region:us"]
---
license: apache-2.0
base_model: StaAhmed/my_awesome_qa_model
tags:
- generated_from_trainer
model-index:
- name: Values_QA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Values_QA
This model is a fine-tuned version of [StaAhmed/my_awesome_qa_model](https://huggingface.co/StaAhmed/my_awesome_qa_model) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7953
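As a quick smoke test, here is a minimal sketch using the 🤗 Transformers question-answering pipeline (the question/context pair below is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
qa = pipeline("question-answering", model="StaAhmed/Values_QA")

# Placeholder inputs; replace with your own question and context.
result = qa(
    question="What is described as a core value?",
    context="Honesty is described as a core value in the handbook.",
)
print(result["answer"], result["score"])
```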
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 2.6667 |
| No log | 2.0 | 18 | 2.3185 |
| No log | 3.0 | 27 | 1.9446 |
| No log | 4.0 | 36 | 1.7909 |
| No log | 5.0 | 45 | 1.7953 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
ekato/SatoruIguchi
|
ekato
| 2024-03-01T09:13:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail",
"region:us"
] |
text-to-image
| 2024-03-01T09:13:47Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/1000018151.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: openrail
---
# Satoru Iguchi (King Gnu)
<Gallery />
## Download model
[Download](/ekato/SatoruIguchi/tree/main) them in the Files & versions tab.
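A minimal loading sketch with 🧨 diffusers, assuming the repo contains a standard SDXL LoRA weight file (the prompt is a placeholder):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model and attach this LoRA adapter.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ekato/SatoruIguchi")  # assumes a single LoRA weight file in the repo

image = pipe("Satoru Iguchi singing on stage").images[0]
image.save("satoru.png")
```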
|
pangjh3/LLM4MT-zh2en-10k
|
pangjh3
| 2024-03-01T08:59:59Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-01T07:36:36Z |
---
tags:
- generated_from_trainer
model-index:
- name: llama2-sfton-10000-bitexts-and-alpacagpt4-and-newstests17to20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-sfton-10000-bitexts-and-alpacagpt4-and-newstests17to20
This model is a fine-tuned version of [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 2
- seed: 34
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 1536
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.14.6
- Tokenizers 0.13.3
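A minimal translation sketch with 🤗 Transformers; the instruction format below is an assumption, so adjust it to whatever prompt template was used during fine-tuning:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "pangjh3/LLM4MT-zh2en-10k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Assumed prompt template for zh->en translation; match the format used at training time.
prompt = "Translate the following Chinese sentence into English:\n机器翻译很有趣。\nEnglish:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```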
|
diana9m/whisper_tiny_dk_23.02
|
diana9m
| 2024-03-01T08:57:30Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-23T13:58:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mogmyij/yelp-model-6k-batch_size8
|
mogmyij
| 2024-03-01T08:53:42Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-01T08:52:57Z |
---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: yelp-model-6k-batch_size8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yelp-model-6k-batch_size8
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9301
- Accuracy: 0.618
- F1: 0.6181
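A minimal inference sketch with the 🤗 Transformers pipeline (the review text is a placeholder; the label names depend on how the dataset was encoded):
```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub.
classifier = pipeline("text-classification", model="mogmyij/yelp-model-6k-batch_size8")

# Placeholder review; the returned labels reflect the (unspecified) training label set.
print(classifier("The food was great but the service was slow."))
```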
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1224 | 1.0 | 750 | 0.9045 | 0.589 | 0.5905 |
| 0.8153 | 2.0 | 1500 | 0.8807 | 0.609 | 0.6100 |
| 0.6604 | 3.0 | 2250 | 0.9192 | 0.618 | 0.6191 |
| 0.5564 | 4.0 | 3000 | 0.9301 | 0.618 | 0.6181 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
yechenzhi1/Pyramids-Training
|
yechenzhi1
| 2024-03-01T08:52:38Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-03-01T08:52:35Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: yechenzhi1/Pyramids-Training
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
johannhartmann/Brezn-7B-WIP
|
johannhartmann
| 2024-03-01T08:51:15Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"FelixChao/WestSeverus-7B-DPO-v2",
"mayflowergmbh/Wiedervereinigung-7b-dpo-laser",
"cognitivecomputations/openchat-3.5-0106-laser",
"conversational",
"base_model:PetroGPT/WestSeverus-7B-DPO-v2",
"base_model:merge:PetroGPT/WestSeverus-7B-DPO-v2",
"base_model:cognitivecomputations/openchat-3.5-0106-laser",
"base_model:merge:cognitivecomputations/openchat-3.5-0106-laser",
"base_model:mayflowergmbh/Wiedervereinigung-7b-dpo-laser",
"base_model:merge:mayflowergmbh/Wiedervereinigung-7b-dpo-laser",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T16:45:39Z |
---
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- mayflowergmbh/Wiedervereinigung-7b-dpo-laser
- cognitivecomputations/openchat-3.5-0106-laser
base_model:
- FelixChao/WestSeverus-7B-DPO-v2
- mayflowergmbh/Wiedervereinigung-7b-dpo-laser
- cognitivecomputations/openchat-3.5-0106-laser
---
# Brezn-7B
Brezn-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [mayflowergmbh/Wiedervereinigung-7b-dpo-laser](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo-laser)
* [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
density: 0.60
weight: 0.30
- model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
parameters:
density: 0.65
weight: 0.40
- model: cognitivecomputations/openchat-3.5-0106-laser
parameters:
density: 0.6
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "johannhartmann/Brezn-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
nmquang112/LeikfazSDXL1
|
nmquang112
| 2024-03-01T08:50:54Z | 2 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-02-29T20:07:47Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: by LeikfazAI,nsfw,anime
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
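A minimal loading sketch, assuming the AutoTrain DreamBooth run exported LoRA weights on top of the SDXL base model (the loading pattern and prompt are assumptions):
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach the DreamBooth LoRA weights.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("nmquang112/LeikfazSDXL1")  # assumes LoRA weights in this repo

image = pipe("by LeikfazAI, anime").images[0]
image.save("sample.png")
```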
|
koesn/CarbonBeagle-11B-truthy-GGUF
|
koesn
| 2024-03-01T08:49:19Z | 51 | 3 | null |
[
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-29T02:11:36Z |
---
license: apache-2.0
language:
- en
---
# CarbonBeagle-11B-truthy-GGUF
## Description
This repo contains GGUF format model files for vicgalle/CarbonBeagle-11B-truthy.
## Files Provided
| Name | Quant | Bits | File Size | Remark |
| ------------------------------------ | ------- | ---- | --------- | -------------------------------- |
| carbonbeagle-11b-truthy.IQ3_XXS.gguf | IQ3_XXS | 3 | 4.44 GB | 3.06 bpw quantization |
| carbonbeagle-11b-truthy.IQ3_S.gguf | IQ3_S | 3 | 4.69 GB | 3.44 bpw quantization |
| carbonbeagle-11b-truthy.IQ3_M.gguf | IQ3_M | 3 | 4.85 GB | 3.66 bpw quantization mix |
| carbonbeagle-11b-truthy.Q4_0.gguf | Q4_0 | 4 | 6.07 GB | 3.56G, +0.2166 ppl |
| carbonbeagle-11b-truthy.IQ4_NL.gguf | IQ4_NL | 4 | 6.14 GB | 4.25 bpw non-linear quantization |
| carbonbeagle-11b-truthy.Q4_K_M.gguf | Q4_K_M | 4 | 6.46 GB | 3.80G, +0.0532 ppl |
| carbonbeagle-11b-truthy.Q5_K_M.gguf | Q5_K_M | 5 | 7.60 GB | 4.45G, +0.0122 ppl |
| carbonbeagle-11b-truthy.Q6_K.gguf | Q6_K | 6 | 8.81 GB | 5.15G, +0.0008 ppl |
| carbonbeagle-11b-truthy.Q8_0.gguf | Q8_0 | 8 | 11.40 GB | 6.70G, +0.0004 ppl |
## Parameters
| path | type | architecture | rope_theta | sliding_win | max_pos_embed |
| -------------------------------- | ------- | ------------------ | ---------- | ----------- | ------------- |
| vicgalle/CarbonBeagle-11B-truthy | mistral | MistralForCausalLM | 10000.0 | 4096 | 32768 |
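A minimal local-inference sketch using the llama-cpp-python bindings (the file path and generic prompt format are placeholders; pick any quant from the files table above):
```python
from llama_cpp import Llama

# Point model_path at one of the downloaded GGUF files listed above.
llm = Llama(model_path="carbonbeagle-11b-truthy.Q4_K_M.gguf", n_ctx=4096)

# Generic completion prompt; adjust to the model's preferred chat format if known.
out = llm("Q: What is a large language model?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```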
## Benchmarks

# Original Model Card
No info.
|
Sumail/Optimus_Prime_zHen01_2b
|
Sumail
| 2024-03-01T08:47:54Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"0x0dad0/nous_nb_01",
"0x0dad0/nous_nb_02",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-01T08:45:30Z |
---
tags:
- merge
- mergekit
- lazymergekit
- 0x0dad0/nous_nb_01
- 0x0dad0/nous_nb_02
base_model:
- 0x0dad0/nous_nb_01
- 0x0dad0/nous_nb_02
---
# Optimus_Prime_zHen01_2b
Optimus_Prime_zHen01_2b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [0x0dad0/nous_nb_01](https://huggingface.co/0x0dad0/nous_nb_01)
* [0x0dad0/nous_nb_02](https://huggingface.co/0x0dad0/nous_nb_02)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: 0x0dad0/nous_nb_01
layer_range: [0, 18]
- model: 0x0dad0/nous_nb_02
layer_range: [0, 18]
merge_method: slerp
base_model: 0x0dad0/nous_nb_02
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.95
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Sumail/Optimus_Prime_zHen01_2b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
matteo1997/bus_sdxl_controlnet
|
matteo1997
| 2024-03-01T08:40:53Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-03-01T07:53:29Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-matteo1997/bus_sdxl_controlnet
These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
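A minimal usage sketch with 🧨 diffusers; since the new conditioning type is unspecified, the conditioning image below is a placeholder:
```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Load the ControlNet weights and plug them into the SDXL base pipeline.
controlnet = ControlNetModel.from_pretrained(
    "matteo1997/bus_sdxl_controlnet", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# "conditioning.png" is a placeholder for an image of the (unspecified) conditioning type.
conditioning = load_image("conditioning.png")
image = pipe("a bus on a city street", image=conditioning).images[0]
image.save("bus.png")
```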
|
Sumail/Optimus_Prime_zHen_2b
|
Sumail
| 2024-03-01T08:37:39Z | 115 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"0x0dad0/nous_nb_01",
"0x0dad0/nous_nb_02",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-01T08:35:12Z |
---
tags:
- merge
- mergekit
- lazymergekit
- 0x0dad0/nous_nb_01
- 0x0dad0/nous_nb_02
base_model:
- 0x0dad0/nous_nb_01
- 0x0dad0/nous_nb_02
---
# Optimus_Prime_zHen_2b
Optimus_Prime_zHen_2b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [0x0dad0/nous_nb_01](https://huggingface.co/0x0dad0/nous_nb_01)
* [0x0dad0/nous_nb_02](https://huggingface.co/0x0dad0/nous_nb_02)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: 0x0dad0/nous_nb_01
layer_range: [0, 18]
- model: 0x0dad0/nous_nb_02
layer_range: [0, 18]
merge_method: slerp
base_model: 0x0dad0/nous_nb_02
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Sumail/Optimus_Prime_zHen_2b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Abdelkareem/bert-large-arabertv02-arabic-finetuned-emotion
|
Abdelkareem
| 2024-03-01T08:34:10Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-large-arabertv02",
"base_model:finetune:aubmindlab/bert-large-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-28T08:21:17Z |
---
base_model: aubmindlab/bert-large-arabertv02
tags:
- generated_from_trainer
model-index:
- name: bert-large-arabertv02-arabic-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-arabertv02-arabic-finetuned-emotion
This model is a fine-tuned version of [aubmindlab/bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8762
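A minimal inference sketch using the Auto classes (the Arabic sentence is a placeholder; the emotion label set comes from the unspecified training data):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Abdelkareem/bert-large-arabertv02-arabic-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Placeholder Arabic input ("I am very happy today").
inputs = tokenizer("أنا سعيد جدا اليوم", return_tensors="pt")
probs = torch.softmax(model(**inputs).logits, dim=-1)
print(model.config.id2label[int(probs.argmax())], float(probs.max()))
```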
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1038 | 1.0 | 1762 | 1.9341 |
| 1.1341 | 2.0 | 3524 | 0.8762 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Kimty/final_test
|
Kimty
| 2024-03-01T08:32:52Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-01T08:28:57Z |
---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: final_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_test
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
prateeky2806/retromagzine-lora-merging
|
prateeky2806
| 2024-03-01T08:28:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-01T08:28:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
prateeky2806/cybertech-lora-merging
|
prateeky2806
| 2024-03-01T08:27:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-01T08:27:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hahmadraz/sepformer-libri3mix-48k-epoch93_latest
|
hahmadraz
| 2024-03-01T08:27:49Z | 1 | 0 |
speechbrain
|
[
"speechbrain",
"Source Separation",
"Speech Separation",
"Audio Source Separation",
"Libri3Mix",
"SepFormer",
"Transformer",
"audio-to-audio",
"audio-source-separation",
"en",
"dataset:Libri3Mix",
"arxiv:2010.13154",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
] |
audio-to-audio
| 2024-03-01T08:18:06Z |
---
language: "en"
thumbnail:
tags:
- Source Separation
- Speech Separation
- Audio Source Separation
- Libri3Mix
- SepFormer
- Transformer
- audio-to-audio
- audio-source-separation
- speechbrain
license: "apache-2.0"
datasets:
- Libri3Mix
metrics:
- SI-SNRi
- SDRi
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# SepFormer trained on Libri3Mix
This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2)
model, implemented with SpeechBrain and pretrained on the Libri3Mix dataset. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
| Release | Train-Set SI-SNRi | Test-Set SI-SNRi |
|:-------------:|:--------------:|:--------------:|
| 16-09-22 | 9.58 dB | 5.75 dB |
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Perform source separation on your own audio file
```python
from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio
model = separator.from_hparams(source="speechbrain/sepformer-libri3mix", savedir='pretrained_models/sepformer-libri3mix')
est_sources = model.separate_file(path='speechbrain/sepformer-wsj03mix/test_mixture_3spks.wav')
torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000)
torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000)
torchaudio.save("source3hat.wav", est_sources[:, :, 2].detach().cpu(), 8000)
```
The system expects input recordings sampled at 8kHz (single channel).
If your signal has a different sample rate, resample it (e.g., using torchaudio or sox) before using the interface.
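For example, a minimal resampling sketch with torchaudio (the file names are placeholders):
```python
import torchaudio

# Resample an arbitrary-rate recording to the 8 kHz expected by the separator.
waveform, sample_rate = torchaudio.load("mixture.wav")
waveform_8k = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=8000)
torchaudio.save("mixture_8k.wav", waveform_8k, 8000)
```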
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain (fc2eabb7).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/LibriMix/separation
python train.py hparams/sepformer.yaml --data_folder=your_data_folder
```
Note: change num_spks to 3 in the yaml file.
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1DN49LtAs6cq1X0jZ8tRMlh2Pj6AecClz).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
#### Referencing SepFormer
```bibtex
@inproceedings{subakan2021attention,
title={Attention is All You Need in Speech Separation},
author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong},
year={2021},
booktitle={ICASSP 2021}
}
@misc{subakan2022sepformer,
author = {Subakan, Cem and Ravanelli, Mirco and Cornell, Samuele and Grondin, Francois and Bronzi, Mirko},
title = {On Using Transformers for Speech-Separation},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
|
saishf/Fett-uccine-11B-Experiment
|
saishf
| 2024-03-01T08:23:48Z | 50 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Epiculous/Fett-uccine-7B",
"base_model:finetune:Epiculous/Fett-uccine-7B",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-28T14:49:46Z |
---
base_model:
- Epiculous/Fett-uccine-7B
library_name: transformers
tags:
- mergekit
- merge
license: agpl-3.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
**Scores lower than the original model! Not recommended unless you're experimenting.**
This model is an experiment in using passthrough on 7B models to further merge them with 10.7B/11B models for fun. I doubt there will be any benefits of this model over the original.
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Epiculous/Fett-uccine-7B](https://huggingface.co/Epiculous/Fett-uccine-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Epiculous/Fett-uccine-7B
layer_range: [0, 24]
- sources:
- model: Epiculous/Fett-uccine-7B
layer_range: [8, 32]
merge_method: passthrough
dtype: float16
```
|
prateeky2806/pixel-lora-merging
|
prateeky2806
| 2024-03-01T08:23:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-01T08:23:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hahmadraz/sepformer-libri3mix-48k-epoch56_best
|
hahmadraz
| 2024-03-01T08:16:49Z | 1 | 0 |
speechbrain
|
[
"speechbrain",
"Source Separation",
"Speech Separation",
"Audio Source Separation",
"Libri3Mix",
"SepFormer",
"Transformer",
"audio-to-audio",
"audio-source-separation",
"en",
"dataset:Libri3Mix",
"arxiv:2010.13154",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
] |
audio-to-audio
| 2024-03-01T08:07:48Z |
---
language: "en"
thumbnail:
tags:
- Source Separation
- Speech Separation
- Audio Source Separation
- Libri3Mix
- SepFormer
- Transformer
- audio-to-audio
- audio-source-separation
- speechbrain
license: "apache-2.0"
datasets:
- Libri3Mix
metrics:
- SI-SNRi
- SDRi
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# SepFormer trained on Libri3Mix
This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2)
model, implemented with SpeechBrain and pretrained on the Libri3Mix dataset. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
| Release | Train-Set SI-SNRi | Test-Set SI-SNRi |
|:-------------:|:--------------:|:--------------:|
| 16-09-22 | 9.51 dB | 5.77 dB |
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Perform source separation on your own audio file
```python
from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio
model = separator.from_hparams(source="speechbrain/sepformer-libri3mix", savedir='pretrained_models/sepformer-libri3mix')
est_sources = model.separate_file(path='speechbrain/sepformer-wsj03mix/test_mixture_3spks.wav')
torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000)
torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000)
torchaudio.save("source3hat.wav", est_sources[:, :, 2].detach().cpu(), 8000)
```
The system expects input recordings sampled at 8kHz (single channel).
If your signal has a different sample rate, resample it (e.g., using torchaudio or sox) before using the interface.
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain (fc2eabb7).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/LibriMix/separation
python train.py hparams/sepformer.yaml --data_folder=your_data_folder
```
Note: change num_spks to 3 in the yaml file.
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1DN49LtAs6cq1X0jZ8tRMlh2Pj6AecClz).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
#### Referencing SepFormer
```bibtex
@inproceedings{subakan2021attention,
title={Attention is All You Need in Speech Separation},
author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong},
year={2021},
booktitle={ICASSP 2021}
}
@misc{subakan2022sepformer,
author = {Subakan, Cem and Ravanelli, Mirco and Cornell, Samuele and Grondin, Francois and Bronzi, Mirko},
title = {On Using Transformers for Speech-Separation},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
|
zysnathan/AGM_Model
|
zysnathan
| 2024-03-01T08:15:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-03-01T07:44:28Z |
AGM, referring to abnormal glucose metabolism, encompasses conditions such as diabetes and prediabetes.
|
Aharneish/merged_mistral
|
Aharneish
| 2024-03-01T08:13:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-01T08:13:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jeiku/Mewthree_7B
|
jeiku
| 2024-03-01T08:12:19Z | 60 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Krisbiantoro/mistral7b_dpo_en",
"base_model:finetune:Krisbiantoro/mistral7b_dpo_en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-01T06:56:35Z |
---
base_model:
- Krisbiantoro/mistral7b_dpo_en
library_name: transformers
tags:
- mergekit
- merge
license: other
---
Mewthree

Draws upon the Prodigy lineage with some No Robots data tossed in for good measure. Dipped its toes in some memerboard essence and added a kiss of BioMistral for anatomy. A DPO LoRA was applied on top.
Seems to do markdown well. It's an overall balanced model with a focus on RP.
|
nilq/mistral-1L-tiny
|
nilq
| 2024-03-01T08:11:50Z | 274 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"dataset:roneneldan/TinyStories",
"arxiv:2305.07759",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-27T18:59:11Z |
---
tags:
- generated_from_trainer
datasets:
- roneneldan/TinyStories
metrics:
- accuracy
model-index:
- name: mistral-1L-tiny
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: roneneldan/TinyStories
type: roneneldan/TinyStories
metrics:
- name: Accuracy
type: accuracy
value: 0.5792084706530948
---
# mistral-1L-tiny
A tiny single-layer, 35.1M-parameter Mistral model with a hidden size of 512 and an MLP intermediate size of 1024.
This model is trained on the roneneldan/TinyStories dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6868
- Accuracy: 0.5792
## Model description
This work is inspired by the 21M parameter one-layer GPT-Neo of the [Tiny Stories paper](https://arxiv.org/abs/2305.07759).
The results were reproduced here to acquire high-frequency checkpoints for further analysis.
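For illustration, the architecture above maps directly onto a plain `MistralConfig`. A minimal sketch, where the head counts and vocabulary size are assumptions rather than values read from the released checkpoint:
```python
from transformers import AutoModelForCausalLM, MistralConfig, MistralForCausalLM

# Dimensions stated in this card; head counts and vocab size are assumptions.
config = MistralConfig(
    num_hidden_layers=1,      # single transformer layer
    hidden_size=512,
    intermediate_size=1024,   # MLP intermediate size
    num_attention_heads=8,    # assumption: 8 heads of dimension 64
    num_key_value_heads=8,    # assumption
    vocab_size=32000,         # assumption: default Mistral vocabulary
)
model = MistralForCausalLM(config)
print(f"{model.num_parameters() / 1e6:.1f}M parameters")

# The released checkpoint itself can simply be loaded from the Hub:
model = AutoModelForCausalLM.from_pretrained("nilq/mistral-1L-tiny")
```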
## Intended uses & limitations
Analysis of feature dynamics and emergence in real-world language models.
## Training procedure
Trained for 90171 steps, corresponding to ~2 hours on a single H100.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Training results
The trained model generates quite consistent English text.
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Namkoy/PhoWhisper_peft_vi
|
Namkoy
| 2024-03-01T08:11:49Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:vinai/PhoWhisper-large",
"base_model:adapter:vinai/PhoWhisper-large",
"license:apache-2.0",
"region:us"
] | null | 2024-03-01T07:37:04Z |
---
language:
- hi
license: apache-2.0
library_name: peft
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
base_model: vinai/PhoWhisper-large
model-index:
- name: whisper_vietnam_nam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_vietnam_nam
This model is a PEFT fine-tuned version of [vinai/PhoWhisper-large](https://huggingface.co/vinai/PhoWhisper-large) on the mozilla-foundation/common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2613
## Model description
More information needed
## Intended uses & limitations
More information needed
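That said, the repository holds a PEFT adapter, so inference means loading the adapter on top of its base model. A minimal sketch, assuming the base is `vinai/PhoWhisper-large` as listed in the metadata and that `audio` is a 16 kHz waveform you have already loaded:
```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("vinai/PhoWhisper-large")
model = PeftModel.from_pretrained(base, "Namkoy/PhoWhisper_peft_vi").eval()
processor = WhisperProcessor.from_pretrained("vinai/PhoWhisper-large")

# `audio` is a 1-D float array sampled at 16 kHz (e.g. loaded with librosa)
features = processor(audio, sampling_rate=16_000, return_tensors="pt").input_features
with torch.inference_mode():
    ids = model.generate(input_features=features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```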
## Training and evaluation data
WER on the evaluation set: 0.14091457396644369
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1426 | 1.0 | 174 | 0.2628 |
| 0.0871 | 2.0 | 348 | 0.2505 |
| 0.0356 | 3.0 | 522 | 0.2613 |
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
BloodJackson/autotrain-nnds7-fkzxh
|
BloodJackson
| 2024-03-01T08:03:21Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mpnet",
"text-classification",
"autotrain",
"dataset:autotrain-nnds7-fkzxh/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-01T08:03:01Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- autotrain-nnds7-fkzxh/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 2.672192096710205
f1_macro: 0.024121042190984706
f1_micro: 0.39683608212722987
f1_weighted: 0.2748118683830545
precision_macro: 0.019620089001105543
precision_micro: 0.39683608212722987
precision_weighted: 0.2243676430869127
recall_macro: 0.042532191885713035
recall_micro: 0.39683608212722987
recall_weighted: 0.39683608212722987
accuracy: 0.39683608212722987
|
Sumail/Optimus_Prime_Zheng04_2b
|
Sumail
| 2024-03-01T07:49:04Z | 115 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergewss]",
"mergekit",
"lazymergekit",
"MesozoicMetallurgist/nous-Pliensbachia",
"MesozoicMetallurgist/nous-Toarcian",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-01T07:46:39Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- MesozoicMetallurgist/nous-Pliensbachia
- MesozoicMetallurgist/nous-Toarcian
---
# Optimus_Prime_Zheng04_2b
Optimus_Prime_Zheng04_2b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [MesozoicMetallurgist/nous-Pliensbachia](https://huggingface.co/MesozoicMetallurgist/nous-Pliensbachia)
* [MesozoicMetallurgist/nous-Toarcian](https://huggingface.co/MesozoicMetallurgist/nous-Toarcian)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: MesozoicMetallurgist/nous-Pliensbachia
layer_range: [0, 18]
- model: MesozoicMetallurgist/nous-Toarcian
layer_range: [0, 18]
merge_method: slerp
base_model: MesozoicMetallurgist/nous-Toarcian
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
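A merge like this can typically be reproduced programmatically; the sketch below assumes the YAML above is saved as `config.yaml` and follows the Python usage shown in the mergekit README (the output directory name is arbitrary):
```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Writes the merged model to ./Optimus_Prime_Zheng04_2b
run_merge(
    config,
    "./Optimus_Prime_Zheng04_2b",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```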
|
teddy-f-47/phi-pl-400M-v_0_1
|
teddy-f-47
| 2024-03-01T07:45:31Z | 36 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"phi-msft",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-02T21:19:48Z |
---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-pl-v_0_1
results: []
---
# phi-1_5-pl-v_0_1
This model is based on [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5). It was trained from scratch on the 20231201 Polish Wikipedia dump.
## Model description
The model was trained for a context length of 1024 tokens. In addition, while the original model has a hidden size of 2048 (1.3B parameters), this model has a hidden size of 1024 (450.3M parameters).
The model used for training was as follows:
```
from transformers import AutoConfig, AutoModelForCausalLM

# trained_tokenizer is the custom Polish tokenizer trained for this model
model_config = AutoConfig.from_pretrained(
    'microsoft/phi-1_5', vocab_size=len(trained_tokenizer), max_position_embeddings=1024,
    hidden_size=1024, attn_implementation="flash_attention_2", trust_remote_code=True
)
model = AutoModelForCausalLM.from_config(model_config, trust_remote_code=True)
```
## Intended uses & limitations
The model is intended for research purposes only. It may generate fictitious, incorrect, unethical, or biased texts. At its current state, it is not suitable for production purposes.
Example:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "teddy-f-47/phi-pl-400M-v_0_1"
tokenizer = AutoTokenizer.from_pretrained(
    model_name, trust_remote_code=True, use_fast=True
)
# to use flash_attention_2, you may need to load the original microsoft/phi-1_5 and load the weights from this model
model = AutoModelForCausalLM.from_pretrained(
    model_name, vocab_size=len(tokenizer),  # attn_implementation="flash_attention_2",
    trust_remote_code=True, torch_dtype=torch.bfloat16
).to(torch.device('cuda'))
model.eval()
generation_config = GenerationConfig.from_pretrained(
    model_name, do_sample=False, repetition_penalty=1.5,
    min_new_tokens=1, max_new_tokens=128
)
test_input = tokenizer("Wrocław to polski miasto. Wrocław jest ", return_tensors='pt').to(torch.device('cuda'))
test_output = model.generate(**test_input, generation_config=generation_config)
test_preds = tokenizer.batch_decode(sequences=test_output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(test_preds)
```
Output:
```
['Wrocław to polski miasto. Wrocław jest stolicą województwa dolnośląskiego, a także siedzibą władz powiatu wrocławskiego i gminy miejsko-wiejskiej Wrocław\n\nMiasto leży w południowo–zachodniej części Dolnego Śląska na Przedgórzu Sudeckim nad rzeką Odrą (odnoga Odry). Przez miasto przebiega droga krajowa nr 94 łącząca Berlin z Wrocławiem oraz linia kolejowa do Wrocławia Głównego przez Wrocław Główny – Kłodzko Główne/Szczecin Zachodni - Legnica. Miasto posiada połączenie kolejowe ze stacją kolejową Wrocław Gądów Mały lub Gądowem Małym poprzez węzeł kolejowy Wrocław Gądów Wielki. W mieście znajduje się stacja towarowa Wrocław Gądów Mały.\nW latach 1975−1998 miejscowość administracyjnie należała do woj. wałbrzyskiego. Od 1']
```
## Training and evaluation data
The 20231201 Polish Wikipedia dump.
## Training procedure
### Training environment
- GPU: 4 x RTX4090 (24GB per GPU, 96GB total)
- CPU: AMD EPYC 75F3 32-core (128 virtual cores)
- RAM: 258GB
- Motherboard: ROME2D32GM, PCIe 4.0 x16
- Storage: 194.0GB NVMe
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- distributed_type: multi-GPU (DDP)
- num_devices: 4
- train_batch_size: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- precision: bf16
- seed: 42
### Training results
- runtime: 2d 21h 26m 36s
- train_loss: 2.727
Average results on the first 8,000 rows of the training data:
- rouge1: 0.25254847037792205
- rouge2: 0.16880333936214448
- rougeLsum: 0.24328783786296845
- cosine_similarity: 0.9603840799331665
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.14.7
- Tokenizers 0.15.0
|
fliarbi/email_phone_new2
|
fliarbi
| 2024-03-01T07:44:14Z | 161 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-01T07:10:02Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
model-index:
- name: email_phone_new2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# email_phone_new2
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
azizksar/train_llma_4bits
|
azizksar
| 2024-03-01T07:42:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-29T22:24:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
uni-zhuan/Reinforce-Cartpole-v1
|
uni-zhuan
| 2024-03-01T07:38:01Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-01T07:37:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 493.30 +/- 20.10
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Coletomyo/TomYo_voice_lora
|
Coletomyo
| 2024-03-01T07:30:03Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-01T03:56:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DoHwan9672/distilbert-base-uncased-finetuned-clinc
|
DoHwan9672
| 2024-03-01T07:27:11Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-01T02:35:36Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
amazon/FalconLite2
|
amazon
| 2024-03-01T07:26:35Z | 40 | 49 |
transformers
|
[
"transformers",
"RefinedWeb",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-10-04T06:53:57Z |
---
license: apache-2.0
inference: false
---
# FalconLite2 Model
FalconLite2 is a fine-tuned and quantized [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) language model, capable of processing long (up to 24K tokens) input sequences. By utilizing 4-bit [GPTQ quantization](https://github.com/PanQiWei/AutoGPTQ) and an adapted RotaryEmbedding, FalconLite2 is able to process 10x longer contexts while consuming 4x less GPU memory than the original model. FalconLite2 is useful for applications such as topic retrieval, summarization, and question answering. FalconLite2 can be deployed on a single AWS `g5.12xlarge` instance with [TGI 1.0.3](https://github.com/huggingface/text-generation-inference/tree/v1.0.3) or [TGI 1.1.0](https://github.com/huggingface/text-generation-inference/tree/v1.1.0), making it suitable for applications that require high performance in resource-constrained environments. You can also deploy FalconLite2 directly on SageMaker endpoints.
FalconLite2 evolves from [FalconLite](https://huggingface.co/amazon/FalconLite), and their similarities and differences are summarized below:
|Model|Fine-tuned on long contexts| Quantization | Max context length| RotaryEmbedding adaptation| Inference framework|
|----------|-------------:|-------------:|------------:|-----------:|-----------:|
| FalconLite | No | 4-bit GPTQ |12K | [dNTK](https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/) | TGI 0.9.2 |
| FalconLite2 | Yes | 4-bit GPTQ |24K | rope_theta = 1000000 | TGI 1.0.3 & 1.1.0 |
## Model Details
- **Developed by:** [AWS Contributors](https://github.com/orgs/aws-samples/teams/aws-prototype-ml-apac)
- **Model type:** [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b)
- **Language:** English
- **Finetuned from weights:** [Falcon 40B SFT OASST-TOP1 model](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560)
- **Finetuned on data:**
- [SLiding-Encoder and Decoder (SLED)](https://huggingface.co/datasets/tau/sled)
- Multi-passage QA from [Natural Questions](https://github.com/google-research-datasets/natural-questions)
- [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- **Served using framework:** [Text-Generation-Inference 1.0.3](https://github.com/huggingface/text-generation-inference/tree/v1.0.3)
- **Model License:** Apache 2.0
- **Contact:** [GitHub issues](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/issues)
## Deploy FalconLite2 on EC2 ##
Log in via SSH to an AWS `g5.12xlarge` instance with the [Deep Learning AMI](https://aws.amazon.com/releasenotes/aws-deep-learning-ami-gpu-pytorch-2-0-ubuntu-20-04/).
### Start TGI server-1.0.3
```bash
git clone https://github.com/awslabs/extending-the-context-length-of-open-source-llms.git falconlite-dev
cd falconlite-dev/falconlite2
# this may take a while to build updated vLLM CUDA kernels
./docker_build.sh
./start_falconlite.sh
```
### Start TGI server-1.1.0
```bash
git clone https://github.com/awslabs/extending-the-context-length-of-open-source-llms.git falconlite-dev
cd falconlite-dev/falconlite2-tgi1.1.0
# this may take a while to build updated vLLM CUDA kernels
./docker_build_rebuild_vllm_rope-theta.sh
./start_falconlite.sh
```
### Perform inference
```bash
# after FalconLite has been completely started
pip install -r ../script/requirements-client.txt
# test short context
python falconlite_client.py
# test long context of 13400 tokens,
# which are copied from [Amazon Aurora FAQs](https://aws.amazon.com/rds/aurora/faqs/)
python falconlite_client.py -l
```
**Important** - Use the prompt template below for FalconLite2:
```
<|prompter|>What are the main challenges to support a long context for LLM?<|endoftext|><|assistant|>
```
**Important** - When using FalconLite2 for inference for the first time, it may require a brief 'warm-up' period that can take tens of seconds. However, subsequent inferences should be faster and return results in a more timely manner. This warm-up period is normal and should not affect the overall performance of the system once the initialisation period has been completed.
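Since TGI exposes a plain HTTP `/generate` endpoint, the prompt template can also be sent without the bundled client. A minimal sketch, assuming the container is reachable on `127.0.0.1:8080` (the actual port depends on how `start_falconlite.sh` is configured):
```python
import requests

prompt = (
    "<|prompter|>What are the main challenges to support a long context for LLM?"
    "<|endoftext|><|assistant|>"
)
resp = requests.post(
    "http://127.0.0.1:8080/generate",  # assumption: local TGI endpoint and port
    json={"inputs": prompt, "parameters": {"max_new_tokens": 256, "temperature": 0.8}},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```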
## Deploy FalconLite2 on Amazon SageMaker ##
To deploy FalconLite2 on a SageMaker endpoint with TGI-1.0.3, please follow [this notebook](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/falconlite2/sm_deploy.ipynb) running on a SageMaker Notebook instance (e.g. `g5.xlarge`).
To deploy FalconLite2 on a SageMaker endpoint with TGI-1.1.0, please follow [this notebook](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/blob/main/falconlite2-tgi1.1.0/sm_deploy.ipynb) running on a SageMaker Notebook instance (e.g. `g5.xlarge`).
## Evaluation Results ##
We evaluated FalconLite2 against benchmarks that are specifically designed to assess the capabilities of LLMs in handling longer contexts.
### Accuracy ###
|Eval task|Input length| Input length | Input length| Input length| Input length|
|----------|-------------:|-------------:|------------:|-----------:|-----------:|
| | 2851 | 5568 | 8313 | 11044 | 13780 |
| [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/) | 100% | 100% | 100% | 100% | 90% |
|Eval task|Input length| Input length | Input length| Input length| Input length|Input length|
|----------|-------------:|-------------:|------------:|-----------:|-----------:|-----------:|
| | 3818 | 5661 | 7505 | 9354 | 11188 | 12657 |
| [Line Retrieval](https://lmsys.org/blog/2023-06-29-longchat/#longeval-results) | 84% | 82% | 66% | 56% | 62% | 34% |
|Eval task|Input length| Input length | Input length| Input length|
|----------|-------------:|-------------:|------------:|-----------:|
| | 3264| 5396 |8329 | 10197 |
| [Pass key Retrieval](https://github.com/epfml/landmark-attention/blob/main/llama/run_test.py#L101) | 100% | 100% | 100% | 100% |
|Eval task| Test set Accuracy | Hard subset Accuracy|
|----------|-------------:|-------------:|
| [Question Answering with Long Input Texts](https://nyu-mll.github.io/quality/) | 53.4% | 45.4% |
## Limitations ##
Before using the FalconLite2 model, it is important to perform your own independent assessment and to take measures to ensure that your use complies with your own quality control practices and standards, as well as with the local rules, laws, regulations, licenses, and terms that apply to you and your content.
|
WangA/distilbert-base-finetuned-ocnli-chinese
|
WangA
| 2024-03-01T07:26:00Z | 115 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-01T07:23:23Z |
---
license: apache-2.0
language:
- zh
---
## TextAttack Model Card
This `distilbert` model was fine-tuned using TextAttack. The model was fine-tuned
for 3 epochs with a batch size of 8,
a maximum sequence length of 512, and an initial learning rate of 3e-05.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best eval set accuracy the model achieved on this task was 0.6830508474576271, reached after 3 epochs.
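OCNLI is a Chinese sentence-pair NLI task, so inference takes a premise/hypothesis pair. A minimal sketch (the example pair is illustrative, and the card does not document the label-id-to-name mapping, so the raw `LABEL_*` outputs may need to be mapped manually):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="WangA/distilbert-base-finetuned-ocnli-chinese",
)
# Premise / hypothesis pair ("He runs every morning." / "He never exercises.")
result = classifier({"text": "他每天早上都去跑步。", "text_pair": "他从不运动。"})
print(result)
```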
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
WangA/albert-base-finetuned-ocnli-chinese
|
WangA
| 2024-03-01T07:22:05Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-01T07:17:51Z |
---
license: apache-2.0
language:
- zh
---
## TextAttack Model Card
This `albert` model was fine-tuned using TextAttack. The model was fine-tuned
for 3 epochs with a batch size of 8,
a maximum sequence length of 512, and an initial learning rate of 3e-05.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best eval set accuracy the model achieved on this task was 0.7091525423728814, reached after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
sekhharr/detr_finetuned_v4_last_best_checkpoint
|
sekhharr
| 2024-03-01T07:17:16Z | 189 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-03-01T07:17:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sekhharr/detr_finetuned_v4_last_checkpoint
|
sekhharr
| 2024-03-01T07:17:05Z | 189 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-03-01T07:16:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
causet/q-FrozenLake-v1-4x4-noSlippery
|
causet
| 2024-03-01T07:11:58Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-01T07:08:55Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="causet/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
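Continuing from the snippet above, acting with the loaded table is a greedy argmax over the current state's row; this sketch assumes the pickled dict exposes a `qtable` key, as in the course notebooks:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy policy
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```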
|
dvijay/distilbert-base-uncased-finetuned-imdb
|
dvijay
| 2024-03-01T07:11:16Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-03-01T06:57:03Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4890
## Model description
More information needed
## Intended uses & limitations
More information needed
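That said, as a masked-language model fine-tuned on what appears to be IMDb-style text, it can be exercised through the standard fill-mask pipeline. A minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="dvijay/distilbert-base-uncased-finetuned-imdb")
for pred in fill("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```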
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6803 | 1.0 | 157 | 2.4968 |
| 2.5851 | 2.0 | 314 | 2.4487 |
| 2.5274 | 3.0 | 471 | 2.4833 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
aisuko/ft-segformer-with-sceneparse150
|
aisuko
| 2024-03-01T07:01:58Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-12-10T01:04:03Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: ft-segformer-with-sceneparse150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-segformer-with-sceneparse150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3884
- Mean Iou: 0.0259
- Mean Accuracy: 0.0547
- Overall Accuracy: 0.4491
- Per Category Iou: [0.3417101480816873, 0.4211288769068327, 0.7747555282866107, 0.3846204354053868, 0.33732378973954696, 0.041151599293209766, 0.46128131427542346, 0.11439788718514722, 0.12616558604979503, 0.18171159576156137, 0.17165912703264458, 0.06346386631243024, 0.11546430134541383, 0.0001487343415790393, 0.0013247427763715854, 0.0, 0.13274620610379087, 0.004944101773323053, 0.011655503401719319, 0.0, 0.0016660546838606434, 0.0, 0.035477393149597074, 0.0, 0.0, 4.4454718423813505e-06, 0.06028847248426353, 0.0, 0.0006802721088435374, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0005368298173512513, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0012449941795398464, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00042048608191068876, 0.0, 0.012187069195213215, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00016971877598818757, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.889338341519792e-05, 0.0, 0.0, 0.0, 0.0, 0.0029921675613834376, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00030254393296857133, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0]
- Per Category Accuracy: [0.5194495194945754, 0.8117698640339073, 0.9672763096625787, 0.820432246643049, 0.6843419269871048, 0.04300891893063284, 0.6009645810887155, 0.16730232665390735, 0.5003207343883315, 0.19801025930029267, 0.35768755152514425, 0.09390059524438853, 0.1317835995063082, 0.00014920000378920644, 0.0015635305528612998, 0.0, 0.2614850183183669, 0.009323204419889503, 0.04031575979701156, 0.0, 0.0017305272984988814, 0.0, 0.11935812364496419, 0.0, 0.0, 5.206923124986983e-06, 0.1372276664160497, 0.0, 0.0007093682075720508, 0.0, 0.0, nan, 0.0, 0.0, 0.001442127818781674, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0012546312652299554, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0005804672761573066, 0.0, 0.03457228301948799, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.00036832412523020257, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, 6.113964294448521e-05, nan, 0.0, nan, nan, 0.006153289295086417, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0017814547540686615, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0]
## Model description
More information needed
## Intended uses & limitations
More information needed
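That said, for checkpoint analysis the model can be run through the standard SegFormer classes. A minimal sketch, assuming preprocessing follows the base `nvidia/mit-b0` defaults (the input image is illustrative):
```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")
model = SegformerForSemanticSegmentation.from_pretrained("aisuko/ft-segformer-with-sceneparse150")

image = Image.open("scene.jpg")  # any RGB image
inputs = processor(images=image, return_tensors="pt")
with torch.inference_mode():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]       # per-pixel class ids
print(pred.shape)
```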
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
fliarbi/email_phone_new
|
fliarbi
| 2024-03-01T06:45:56Z | 161 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-01T05:16:02Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
model-index:
- name: email_phone_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# email_phone_new
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
DAIEF/dqn-SpaceInvadersNoFrameskip-v4
|
DAIEF
| 2024-03-01T06:42:09Z | 7 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-01T06:41:46Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 329.00 +/- 157.97
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DAIEF -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DAIEF -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga DAIEF
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 10000),
('learning_starts', 100000),
('n_timesteps', 5000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Battugs/wav2vec2-large-xlsr-53_mn_commonvoice_11
|
Battugs
| 2024-03-01T06:37:18Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-01T05:39:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
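Until the authors provide official usage code, here is a minimal sketch with the 🤗 `pipeline` API, assuming this repo hosts a standard wav2vec2 CTC checkpoint for Mongolian speech recognition:
```python
from transformers import pipeline

# Repo id taken from this card; the audio path is a hypothetical local file
asr = pipeline("automatic-speech-recognition", model="Battugs/wav2vec2-large-xlsr-53_mn_commonvoice_11")
print(asr("sample.wav")["text"])
```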
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tjkmitl/SadNews_3_25epochs
|
tjkmitl
| 2024-03-01T06:35:11Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:csebuetnlp/mT5_multilingual_XLSum",
"base_model:finetune:csebuetnlp/mT5_multilingual_XLSum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-01T06:33:40Z |
---
base_model: csebuetnlp/mT5_multilingual_XLSum
tags:
- generated_from_trainer
model-index:
- name: SadNews_3_5epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SadNews_3_5epochs
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0873
## Model description
More information needed
## Intended uses & limitations
More information needed
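Pending details from the authors, a minimal inference sketch, assuming the checkpoint keeps the summarization behaviour of its mT5_multilingual_XLSum base:
```python
from transformers import pipeline

# Repo id taken from this card; replace the placeholder with a real article
summarizer = pipeline("summarization", model="tjkmitl/SadNews_3_25epochs")
print(summarizer("<news article text>", max_length=84)[0]["summary_text"])
```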
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7195 | 0.76 | 500 | 3.3037 |
| 0.4958 | 1.52 | 1000 | 3.4470 |
| 0.542 | 2.29 | 1500 | 3.5975 |
| 0.2884 | 3.05 | 2000 | 3.6615 |
| 0.2993 | 3.81 | 2500 | 3.8767 |
| 0.303 | 4.57 | 3000 | 4.0873 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Aishupatil/gemma-Code-Instruct-Finetune-test
|
Aishupatil
| 2024-03-01T06:34:36Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-01T06:29:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
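No official snippet is provided yet; a minimal sketch, assuming a standard Gemma causal-LM checkpoint that ships a chat template (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Aishupatil/gemma-Code-Instruct-Finetune-test"  # repo id from this card
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
chat = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tok.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
print(tok.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```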
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
David-Xu/Mistral-7B-Instruct-v0.3
|
David-Xu
| 2024-03-01T06:34:18Z | 8 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-02-28T06:47:44Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
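Absent official code, a minimal loading sketch, assuming this repo is a PEFT adapter for the base model declared in the metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model from the card's metadata; adapter repo id from this card
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "David-Xu/Mistral-7B-Instruct-v0.3")
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```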
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
aisuko/ft-vit-with-food-101
|
aisuko
| 2024-03-01T06:30:17Z | 204 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-09T12:20:59Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ft-vit-with-food-101
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-vit-with-food-101
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4110
- Accuracy: 0.52
## Model description
More information needed
## Intended uses & limitations
More information needed
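Pending details from the authors, a minimal inference sketch, assuming the standard ViT image-classification head fine-tuned on food101:
```python
from transformers import pipeline

# Repo id taken from this card; the image path is a hypothetical local file
classifier = pipeline("image-classification", model="aisuko/ft-vit-with-food-101")
print(classifier("dish.jpg"))
```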
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.57 | 1 | 4.5942 | 0.0 |
| No log | 1.14 | 2 | 4.4092 | 0.5 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
prateeky2806/toy_peft_model-new
|
prateeky2806
| 2024-03-01T06:30:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-01T06:04:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dev-cuai/Reinforce-Pixelcopter-PLE-v0-03011528
|
dev-cuai
| 2024-03-01T06:28:16Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-01T06:28:14Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0-03011528
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 69.00 +/- 42.57
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
DAIEF/q-learning-Taxi-v3
|
DAIEF
| 2024-03-01T06:27:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-01T05:12:57Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # assumed import; `load_from_hub` is the helper from the Deep RL Course notebooks

model = load_from_hub(repo_id="DAIEF/q-learning-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
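`load_from_hub` is not part of a published package; a minimal equivalent is sketched below, assuming the checkpoint is a pickled dict with an `env_id` key as in the course notebooks:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table dict from the Hub (sketch of the course helper)
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)
```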
|
DAIEF/PPO-test-LunarLander-v2
|
DAIEF
| 2024-03-01T06:23:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-01T06:23:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -1269.17 +/- 691.43
name: mean_reward
verified: false
---
# **PPO-test** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-test** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Official usage code is still a TODO; until then, a minimal loading sketch (the checkpoint filename is an assumption, so check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename follows the usual huggingface_sb3 convention (assumption)
checkpoint = load_from_hub(repo_id="DAIEF/PPO-test-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tjkmitl/SadNews_2_20epochs
|
tjkmitl
| 2024-03-01T06:17:22Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:csebuetnlp/mT5_multilingual_XLSum",
"base_model:finetune:csebuetnlp/mT5_multilingual_XLSum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-01T06:15:53Z |
---
base_model: csebuetnlp/mT5_multilingual_XLSum
tags:
- generated_from_trainer
model-index:
- name: SadNews_2_20epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SadNews_2_20epochs
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
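For reference, the list above maps onto the 🤗 `Seq2SeqTrainingArguments` roughly as follows (a sketch; the `output_dir` and any omitted defaults are assumptions):
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="SadNews_2_20epochs",  # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
)
```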
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3998 | 0.76 | 500 | 2.8591 |
| 2.8316 | 1.52 | 1000 | 2.5640 |
| 2.3969 | 2.29 | 1500 | 2.4411 |
| 1.8762 | 3.05 | 2000 | 2.4241 |
| 1.9914 | 3.81 | 2500 | 2.3928 |
| 1.8294 | 4.57 | 3000 | 2.4093 |
| 1.7505 | 5.34 | 3500 | 2.4674 |
| 1.313 | 6.1 | 4000 | 2.5457 |
| 1.5311 | 6.86 | 4500 | 2.4944 |
| 1.2645 | 7.62 | 5000 | 2.5876 |
| 1.0581 | 8.38 | 5500 | 2.6490 |
| 1.2508 | 9.15 | 6000 | 2.7311 |
| 1.3671 | 9.91 | 6500 | 2.7163 |
| 0.9799 | 10.67 | 7000 | 2.7751 |
| 1.03 | 11.43 | 7500 | 2.8911 |
| 0.8382 | 12.2 | 8000 | 2.9649 |
| 1.0784 | 12.96 | 8500 | 2.9205 |
| 0.7969 | 13.72 | 9000 | 3.0321 |
| 0.7965 | 14.48 | 9500 | 3.0564 |
| 0.7737 | 15.24 | 10000 | 3.1322 |
| 0.591 | 16.01 | 10500 | 3.1051 |
| 0.8767 | 16.77 | 11000 | 3.1922 |
| 0.6458 | 17.53 | 11500 | 3.2355 |
| 0.591 | 18.29 | 12000 | 3.2301 |
| 0.6428 | 19.05 | 12500 | 3.2603 |
| 0.7789 | 19.82 | 13000 | 3.2760 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
ahwuiot/gemma-Code-Instruct-Finetune-test
|
ahwuiot
| 2024-03-01T06:16:53Z | 115 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-01T06:11:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
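In the meantime, a minimal sketch, assuming a standard Gemma causal-LM checkpoint (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ahwuiot/gemma-Code-Instruct-Finetune-test"  # repo id from this card
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
inputs = tok("# Write a function that checks whether a number is prime\n", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```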
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Andyrasika/code-llama-7b-text-to-sql
|
Andyrasika
| 2024-03-01T06:14:36Z | 4 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-03-01T05:05:35Z |
---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
datasets:
- generator
model-index:
- name: code-llama-7b-text-to-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-text-to-sql
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
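Pending details from the authors, a minimal loading sketch, assuming this repo is a PEFT adapter for the CodeLlama base declared in the metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model from the card's metadata; adapter repo id from this card
base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
model = PeftModel.from_pretrained(base, "Andyrasika/code-llama-7b-text-to-sql")
tok = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
```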
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.2
|
reshphil23/lab2_8bit_adam
|
reshphil23
| 2024-03-01T06:10:32Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-01T06:10:17Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 51.44063860285319
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8951
- Bleu: 51.4406
## Model description
More information needed
## Intended uses & limitations
More information needed
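Until the authors add usage notes, a minimal sketch, assuming a standard Marian English-to-French checkpoint:
```python
from transformers import pipeline

# Repo id taken from this card
translator = pipeline("translation_en_to_fr", model="reshphil23/lab2_8bit_adam")
print(translator("The file was saved successfully.")[0]["translation_text"])
```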
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
aisuko/ft-vit-base-patch16-224-in21k-on-food101-lora
|
aisuko
| 2024-03-01T06:04:36Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:food101",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:adapter:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"region:us"
] | null | 2024-01-14T22:43:07Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: ft-vit-base-patch16-224-in21k-on-food101-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-vit-base-patch16-224-in21k-on-food101-lora
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1032
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
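Loading a LoRA image-classification adapter typically requires rebuilding the base model with the target label set first; a sketch under that assumption (the `num_labels` value comes from food101's 101 classes):
```python
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumption: the base must carry a 101-class head before the adapter is attached
base = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=101
)
model = PeftModel.from_pretrained(base, "aisuko/ft-vit-base-patch16-224-in21k-on-food101-lora")
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
```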
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 2.1032 | 1.0 |
| No log | 2.0 | 4 | 0.8816 | 1.0 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
Lienid/nous-twelve-b
|
Lienid
| 2024-03-01T05:35:38Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-01T00:24:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
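As a stopgap, a minimal sketch with the `pipeline` API, assuming a standard causal-LM checkpoint:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Lienid/nous-twelve-b")  # repo id from this card
print(generator("Once upon a time", max_new_tokens=32)[0]["generated_text"])
```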
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
reshphil23/lab2_efficient
|
reshphil23
| 2024-03-01T05:23:29Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-01T05:23:14Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 49.93747562363063
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9599
- Bleu: 49.9375
## Model description
More information needed
## Intended uses & limitations
More information needed
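As with its sibling checkpoint above, a minimal sketch, assuming a standard Marian English-to-French model:
```python
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="reshphil23/lab2_efficient")  # repo id from this card
print(translator("Open the settings menu.")[0]["translation_text"])
```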
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Rasi1610/DeathSe46_series2_model_p1
|
Rasi1610
| 2024-03-01T05:18:17Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-01-19T09:14:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
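In the absence of official code, a minimal sketch, assuming an image-captioning-style vision-encoder-decoder checkpoint whose image processor and tokenizer are hosted in the same repo (all of these are assumptions):
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

repo = "Rasi1610/DeathSe46_series2_model_p1"  # repo id from this card
model = VisionEncoderDecoderModel.from_pretrained(repo)
image_processor = AutoImageProcessor.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)
pixel_values = image_processor(Image.open("input.png"), return_tensors="pt").pixel_values  # hypothetical image
print(tokenizer.decode(model.generate(pixel_values)[0], skip_special_tokens=True))
```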
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ThuyNT03/CS505_COQE_viT5_Prompting5_OSAPL
|
ThuyNT03
| 2024-03-01T05:05:02Z | 104 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-01T04:02:51Z |
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_Prompting5_OSAPL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_Prompting5_OSAPL
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
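Pending details from the authors, a minimal sketch, assuming a standard T5-style seq2seq checkpoint; the prompt format used during fine-tuning is unknown, so the input is a placeholder:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "ThuyNT03/CS505_COQE_viT5_Prompting5_OSAPL"  # repo id from this card
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)
inputs = tok("<task-specific prompt here>", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```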
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
Lifehouse/distilbert-sql-timeout-classifier-2024030111
|
Lifehouse
| 2024-03-01T05:02:33Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:generator",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-01T03:13:55Z |
---
tags:
- generated_from_trainer
datasets:
- generator
metrics:
- accuracy
model-index:
- name: distilbert-sql-timeout-classifier-2024030111
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: generator
type: generator
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8218561838189494
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-sql-timeout-classifier-2024030111
This model was trained from scratch on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7922
- Accuracy: 0.8219
## Model description
More information needed
## Intended uses & limitations
More information needed
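Until the authors document the labels, a minimal sketch, assuming a standard DistilBERT sequence-classification head (the query string is illustrative):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Lifehouse/distilbert-sql-timeout-classifier-2024030111")
print(clf("SELECT * FROM orders o JOIN order_items i ON o.id = i.order_id"))
```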
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1895 | 1.0 | 2181 | 0.6169 | 0.8097 |
| 0.1498 | 2.0 | 4362 | 0.7710 | 0.8146 |
| 0.1169 | 3.0 | 6543 | 0.7922 | 0.8219 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
phdatdt/t5
|
phdatdt
| 2024-03-01T04:53:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-28T04:55:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
McGill-NLP/flan-t5-large-weblinx
|
McGill-NLP
| 2024-03-01T04:44:27Z | 51 | 1 |
transformers
|
[
"transformers",
"safetensors",
"weblinx",
"text-generation-inference",
"web-agents",
"agents",
"text-generation",
"en",
"dataset:McGill-NLP/WebLINX",
"dataset:McGill-NLP/WebLINX-full",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T20:08:17Z |
---
datasets:
- McGill-NLP/WebLINX
- McGill-NLP/WebLINX-full
language:
- en
metrics:
- f1
- iou
- chrf
library_name: transformers
pipeline_tag: text-generation
tags:
- weblinx
- text-generation-inference
- web-agents
- agents
---
<div align="center">
<h1 style="margin-bottom: 0.5em;">WebLINX: Real-World Website Navigation with Multi-Turn Dialogue</h1>
<em>Xing Han Lù*, Zdeněk Kasner*, Siva Reddy</em>
</div>
<div style="margin-bottom: 2em"></div>
<div style="display: flex; justify-content: space-around; align-items: center; font-size: 120%;">
<div><a href="https://mcgill-nlp.github.io/weblinx">🌐Website</a></div>
<div><a href="https://huggingface.co/spaces/McGill-NLP/weblinx-explorer">💻Explorer</a></div>
<div><a href="https://huggingface.co/datasets/McGill-NLP/WebLINX">🤗Dataset</a></div>
<div><a href="https://github.com/McGill-NLP/weblinx">💾Code</a></div>
</div>
## Original Model
This model is finetuned on WebLINX using checkpoints previously published on Huggingface Hub.\
[Click here to access the original model.](https://huggingface.co/google/flan-t5-large)
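Unlike some of the other WebLINX checkpoints below, this card has no quickstart. The sketch below is not from the original authors; it assumes the standard text2text pipeline works for this Flan-T5 checkpoint and that you build `turn_text` yourself from the WebLINX templates (the exact template file for Flan-T5 is not specified here).
```python
from transformers import pipeline

# Minimal sketch, not part of the original card. Build `turn_text` from the
# WebLINX templates in the dataset repo; the input below is a placeholder.
agent = pipeline(
    "text2text-generation", model="McGill-NLP/flan-t5-large-weblinx", device=0
)
turn_text = "..."  # formatted dialogue turn
out = agent(turn_text, max_new_tokens=64)
print(out[0]["generated_text"])
```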
|
McGill-NLP/flan-t5-xl-weblinx
|
McGill-NLP
| 2024-03-01T04:44:05Z | 66 | 1 |
transformers
|
[
"transformers",
"safetensors",
"weblinx",
"text-generation-inference",
"web-agents",
"agents",
"text-generation",
"en",
"dataset:McGill-NLP/WebLINX",
"dataset:McGill-NLP/WebLINX-full",
"arxiv:2402.05930",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T20:08:27Z |
---
datasets:
- McGill-NLP/WebLINX
- McGill-NLP/WebLINX-full
language:
- en
metrics:
- f1
- iou
- chrf
library_name: transformers
pipeline_tag: text-generation
tags:
- weblinx
- text-generation-inference
- web-agents
- agents
license: apache-2.0
---
<div align="center">
<h1 style="margin-bottom: 0.5em;">WebLINX: Real-World Website Navigation with Multi-Turn Dialogue</h1>
<em>Xing Han Lù*, Zdeněk Kasner*, Siva Reddy</em>
</div>
<div style="margin-bottom: 2em"></div>
<div style="display: flex; justify-content: space-around; align-items: center; font-size: 120%;">
<div><a href="https://arxiv.org/abs/2402.05930">📄Paper</a></div>
<div><a href="https://mcgill-nlp.github.io/weblinx">🌐Website</a></div>
<div><a href="https://huggingface.co/spaces/McGill-NLP/weblinx-explorer">💻Explorer</a></div>
<div><a href="https://huggingface.co/datasets/McGill-NLP/WebLINX">🤗Dataset</a></div>
<div><a href="https://github.com/McGill-NLP/weblinx">💾Code</a></div>
</div>
## Original Model
This model is finetuned on WebLINX using checkpoints previously published on Huggingface Hub.\
[Click here to access the original model.](https://huggingface.co/google/flan-t5-xl)
|
McGill-NLP/MindAct-large-weblinx
|
McGill-NLP
| 2024-03-01T04:43:38Z | 56 | 0 |
transformers
|
[
"transformers",
"safetensors",
"weblinx",
"text-generation-inference",
"web-agents",
"agents",
"text-generation",
"en",
"dataset:McGill-NLP/WebLINX",
"dataset:McGill-NLP/WebLINX-full",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-10T06:28:53Z |
---
datasets:
- McGill-NLP/WebLINX
- McGill-NLP/WebLINX-full
language:
- en
metrics:
- f1
- iou
- chrf
library_name: transformers
pipeline_tag: text-generation
tags:
- weblinx
- text-generation-inference
- web-agents
- agents
license: apache-2.0
---
<div align="center">
<h1 style="margin-bottom: 0.5em;">WebLINX: Real-World Website Navigation with Multi-Turn Dialogue</h1>
<em>Xing Han Lù*, Zdeněk Kasner*, Siva Reddy</em>
</div>
<div style="margin-bottom: 2em"></div>
<div style="display: flex; justify-content: space-around; align-items: center; font-size: 120%;">
<div><a href="https://mcgill-nlp.github.io/weblinx">🌐Website</a></div>
<div><a href="https://huggingface.co/spaces/McGill-NLP/weblinx-explorer">💻Explorer</a></div>
<div><a href="https://huggingface.co/datasets/McGill-NLP/WebLINX">🤗Dataset</a></div>
<div><a href="https://github.com/McGill-NLP/weblinx">💾Code</a></div>
</div>
## Original Model
This model is finetuned on WebLINX using checkpoints previously published on Huggingface Hub.\
[Click here to access the original model.](https://huggingface.co/osunlp/MindAct_ActionPrediction_flan-t5-large)
|
McGill-NLP/MindAct-base-weblinx
|
McGill-NLP
| 2024-03-01T04:43:14Z | 60 | 0 |
transformers
|
[
"transformers",
"safetensors",
"weblinx",
"text-generation-inference",
"web-agents",
"agents",
"text-generation",
"en",
"dataset:McGill-NLP/WebLINX",
"dataset:McGill-NLP/WebLINX-full",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-10T06:27:07Z |
---
datasets:
- McGill-NLP/WebLINX
- McGill-NLP/WebLINX-full
language:
- en
metrics:
- f1
- iou
- chrf
library_name: transformers
pipeline_tag: text-generation
tags:
- weblinx
- text-generation-inference
- web-agents
- agents
license: apache-2.0
---
<div align="center">
<h1 style="margin-bottom: 0.5em;">WebLINX: Real-World Website Navigation with Multi-Turn Dialogue</h1>
<em>Xing Han Lù*, Zdeněk Kasner*, Siva Reddy</em>
</div>
<div style="margin-bottom: 2em"></div>
<div style="display: flex; justify-content: space-around; align-items: center; font-size: 120%;">
<div><a href="https://mcgill-nlp.github.io/weblinx">🌐Website</a></div>
<div><a href="https://huggingface.co/spaces/McGill-NLP/weblinx-explorer">💻Explorer</a></div>
<div><a href="https://huggingface.co/datasets/McGill-NLP/WebLINX">🤗Dataset</a></div>
<div><a href="https://github.com/McGill-NLP/weblinx">💾Code</a></div>
</div>
## Original Model
This model is finetuned on WebLINX using checkpoints previously published on Huggingface Hub.\
[Click here to access the original model.](https://huggingface.co/osunlp/MindAct_ActionPrediction_flan-t5-base)
|
McGill-NLP/Sheared-LLaMA-1.3B-weblinx
|
McGill-NLP
| 2024-03-01T04:42:24Z | 109 | 1 |
transformers
|
[
"transformers",
"safetensors",
"weblinx",
"text-generation-inference",
"web-agents",
"agents",
"text-generation",
"en",
"dataset:McGill-NLP/WebLINX",
"dataset:McGill-NLP/WebLINX-full",
"license:llama2",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T20:05:48Z |
---
datasets:
- McGill-NLP/WebLINX
- McGill-NLP/WebLINX-full
language:
- en
metrics:
- f1
- iou
- chrf
library_name: transformers
pipeline_tag: text-generation
tags:
- weblinx
- text-generation-inference
- web-agents
- agents
license: llama2
---
<div align="center">
<h1 style="margin-bottom: 0.5em;">WebLINX: Real-World Website Navigation with Multi-Turn Dialogue</h1>
<em>Xing Han Lù*, Zdeněk Kasner*, Siva Reddy</em>
</div>
<div style="margin-bottom: 2em"></div>
<div style="display: flex; justify-content: space-around; align-items: center; font-size: 120%;">
<div><a href="https://mcgill-nlp.github.io/weblinx">🌐Website</a></div>
<div><a href="https://huggingface.co/spaces/McGill-NLP/weblinx-explorer">💻Explorer</a></div>
<div><a href="https://huggingface.co/datasets/McGill-NLP/WebLINX">🤗Dataset</a></div>
<div><a href="https://github.com/McGill-NLP/weblinx">💾Code</a></div>
</div>
## Quickstart
```python
from datasets import load_dataset
from huggingface_hub import snapshot_download
from transformers import pipeline
# Load validation split
valid = load_dataset("McGill-NLP/weblinx", split="validation")
# Download and load the templates
snapshot_download(
"McGill-NLP/WebLINX", repo_type="dataset", allow_patterns="templates/*.txt", local_dir="./"
)
with open('templates/llama.txt') as f:
template = f.read()
turn = valid[0]
turn_text = template.format(**turn)
# Load action model and input the text to get prediction
action_model = pipeline(
model="McGill-NLP/Sheared-LLaMA-1.3B-weblinx", device=0, torch_dtype='auto'
)
out = action_model(turn_text, return_full_text=False, max_new_tokens=64, truncation=True)
pred = out[0]['generated_text']
print("Ref:", turn["action"])
print("Pred:", pred)
```
## Original Model
This model is finetuned on WebLINX using checkpoints previously published on Huggingface Hub.\
[Click here to access the original model.](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B)
## License
This model is derived from LLaMA-2, which can only be used with the [LLaMA 2 Community License Agreement](https://github.com/facebookresearch/llama/blob/main/LICENSE). By using or distributing any portion or element of this model, you agree to be bound by this Agreement.
|
McGill-NLP/Llama-2-7b-chat-weblinx
|
McGill-NLP
| 2024-03-01T04:40:25Z | 20 | 2 |
transformers
|
[
"transformers",
"pytorch",
"weblinx",
"text-generation-inference",
"web-agents",
"agents",
"text-generation",
"en",
"dataset:McGill-NLP/WebLINX",
"dataset:McGill-NLP/WebLINX-full",
"license:llama2",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T19:29:12Z |
---
datasets:
- McGill-NLP/WebLINX
- McGill-NLP/WebLINX-full
language:
- en
metrics:
- f1
- iou
- chrf
library_name: transformers
pipeline_tag: text-generation
tags:
- weblinx
- text-generation-inference
- web-agents
- agents
license: llama2
---
<div align="center">
<h1 style="margin-bottom: 0.5em;">WebLINX: Real-World Website Navigation with Multi-Turn Dialogue</h1>
<em>Xing Han Lù*, Zdeněk Kasner*, Siva Reddy</em>
</div>
<div style="margin-bottom: 2em"></div>
<div style="display: flex; justify-content: space-around; align-items: center; font-size: 120%;">
<div><a href="https://mcgill-nlp.github.io/weblinx">🌐Website</a></div>
<div><a href="https://huggingface.co/spaces/McGill-NLP/weblinx-explorer">💻Explorer</a></div>
<div><a href="https://huggingface.co/datasets/McGill-NLP/WebLINX">🤗Dataset</a></div>
<div><a href="https://github.com/McGill-NLP/weblinx">💾Code</a></div>
</div>
## Quickstart
```python
from datasets import load_dataset
from huggingface_hub import snapshot_download
from transformers import pipeline
# Load validation split
valid = load_dataset("McGill-NLP/weblinx", split="validation")
# Download and load the templates
snapshot_download(
"McGill-NLP/WebLINX", repo_type="dataset", allow_patterns="templates/*.txt", local_dir="./"
)
with open('templates/llama.txt') as f:
template = f.read()
turn = valid[0]
turn_text = template.format(**turn)
# Load action model and input the text to get prediction
action_model = pipeline(
model="McGill-NLP/Llama-2-7b-chat-weblinx", device=0, torch_dtype='auto'
)
out = action_model(turn_text, return_full_text=False, max_new_tokens=64, truncation=True)
pred = out[0]['generated_text']
print("Ref:", turn["action"])
print("Pred:", pred)
```
## Original Model
This model is finetuned on WebLINX using checkpoints previously published on Huggingface Hub.\
[Click here to access the original model.](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
## License
This model is derived from LLaMA-2, which can only be used with the [LLaMA 2 Community License Agreement](https://github.com/facebookresearch/llama/blob/main/LICENSE). By using or distributing any portion or element of this model, you agree to be bound by this Agreement.
|
ycros/DonutHole-8x7B
|
ycros
| 2024-03-01T04:39:08Z | 9 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora",
"base_model:merge:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora",
"base_model:KoboldAI/Mixtral-8x7B-Holodeck-v1",
"base_model:merge:KoboldAI/Mixtral-8x7B-Holodeck-v1",
"base_model:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:merge:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:merge:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:merge:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-28T14:09:56Z |
---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- mistralai/Mixtral-8x7B-v0.1
- Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- KoboldAI/Mixtral-8x7B-Holodeck-v1
- jondurbin/bagel-dpo-8x7b-v0.2
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge
license: apache-2.0
---
# DonutHole-8x7B
[GGUF versions here](https://huggingface.co/ycros/DonutHole-8x7B-GGUF)
Bagel, Mixtral Instruct, Holodeck, LimaRP.
> What mysteries lie in the hole of a donut?
Good with Alpaca prompt formats; it also works with the Mistral format. See usage details below.

This is similar to [BagelMIsteryTour](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B), but I've swapped out Sensualize for the new Holodeck.
I'm not sure yet whether it's better, or how it does at higher (8k+) contexts.
Similar sampler advice applies as for BMT: minP (0.07-0.3 to taste) -> temp (either dynatemp 0-4ish, or a temp of 3-4 with a smoothing factor of around 2.5ish).
And yes, that's temp last. It does okay without rep pen up to a point: it doesn't seem to get into a complete jam, but it can start to repeat sentences, so you'll probably need some; 1.02-1.05 at a 1024 range seems okayish
(rep pen sucks, but there are better things coming).
I've mainly tested with LimaRP-style Alpaca prompts (instruction/input/response), and briefly with Mistral's own format.
**Full credit to all the model and dataset authors, I am but a derp with compute and a yaml file.**
---
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
* [KoboldAI/Mixtral-8x7B-Holodeck-v1](https://huggingface.co/KoboldAI/Mixtral-8x7B-Holodeck-v1)
* [jondurbin/bagel-dpo-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2)
* [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mixtral-8x7B-v0.1
models:
- model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
parameters:
density: 0.5
weight: 0.2
- model: KoboldAI/Mixtral-8x7B-Holodeck-v1
parameters:
density: 0.5
weight: 0.2
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
parameters:
density: 0.6
weight: 1.0
- model: jondurbin/bagel-dpo-8x7b-v0.2
parameters:
density: 0.6
weight: 0.5
merge_method: dare_ties
dtype: bfloat16
```
|
McGill-NLP/fuyu-8b-weblinx
|
McGill-NLP
| 2024-03-01T04:31:47Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"weblinx",
"text-generation-inference",
"web-agents",
"agents",
"text-generation",
"en",
"dataset:McGill-NLP/WebLINX",
"dataset:McGill-NLP/WebLINX-full",
"arxiv:2402.05930",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T20:10:54Z |
---
datasets:
- McGill-NLP/WebLINX
- McGill-NLP/WebLINX-full
language:
- en
metrics:
- f1
- iou
- chrf
library_name: transformers
pipeline_tag: text-generation
tags:
- weblinx
- text-generation-inference
- web-agents
- agents
license: cc-by-nc-4.0
---
<div align="center">
<h1 style="margin-bottom: 0.5em;">WebLINX: Real-World Website Navigation with Multi-Turn Dialogue</h1>
<em>Xing Han Lù*, Zdeněk Kasner*, Siva Reddy</em>
</div>
<div style="margin-bottom: 2em"></div>
<div style="display: flex; justify-content: space-around; align-items: center; font-size: 120%;">
<div><a href="https://arxiv.org/abs/2402.05930">📄Paper</a></div>
<div><a href="https://mcgill-nlp.github.io/weblinx">🌐Website</a></div>
<div><a href="https://huggingface.co/spaces/McGill-NLP/weblinx-explorer">💻Explorer</a></div>
<div><a href="https://huggingface.co/datasets/McGill-NLP/WebLINX">🤗Dataset</a></div>
<div><a href="https://github.com/McGill-NLP/weblinx">💾Code</a></div>
</div>
## Original Model
This model is finetuned on WebLINX using checkpoints previously published on Huggingface Hub.\
[Click here to access the original model.](https://huggingface.co/adept/fuyu-8b)
|
lewtun/gemma-7b-sft-full-openhermes-v0
|
lewtun
| 2024-03-01T04:30:19Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/OpenHermes-2.5",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-29T23:46:55Z |
---
license: other
base_model: google/gemma-7b
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/OpenHermes-2.5
model-index:
- name: gemma-7b-sft-full-openhermes-v0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-7b-sft-full-openhermes-v0
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the HuggingFaceH4/OpenHermes-2.5 dataset.
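No usage code is included in the card; below is a minimal generation sketch (not from the original authors) using the tokenizer's built-in chat template. The example message is a placeholder.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: chat-style generation via the tokenizer's chat template.
model_id = "lewtun/gemma-7b-sft-full-openhermes-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "Explain beam search in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```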
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
|
cvzion/lora-msistral7b-dqg-v5
|
cvzion
| 2024-03-01T04:25:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-01T04:24:50Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** cvzion
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
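A minimal loading sketch follows (not part of the original card), assuming Unsloth's `FastLanguageModel` API; `max_seq_length` and the Alpaca-style prompt are assumptions, since the training prompt format is not documented here.
```python
from unsloth import FastLanguageModel

# Minimal sketch; max_seq_length is an assumed value.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="cvzion/lora-msistral7b-dqg-v5",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode

# Alpaca-style prompt is an assumption, not documented by the author.
prompt = "### Instruction:\nWrite a quiz question about geography.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```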
|
mlx-community/ChatMusician-hf-4bit-mlx
|
mlx-community
| 2024-03-01T04:14:48Z | 7 | 0 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"text-generation",
"en",
"license:mit",
"region:us"
] |
text-generation
| 2024-03-01T02:56:12Z |
---
language:
- en
license: mit
tags:
- mlx
metrics:
- accuracy
pipeline_tag: text-generation
---
# mlx-community/ChatMusician-hf-4bit-mlx
This model was converted to MLX format from [`m-a-p/ChatMusician`](https://huggingface.co/m-a-p/ChatMusician).
Refer to the [original model card](https://huggingface.co/m-a-p/ChatMusician) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
import re
from string import Template
from mlx_lm.utils import generate, load
prompt_template = Template("Human: ${inst} </s> Assistant: ")
model, tokenizer = load("mlx-community/ChatMusician-hf-4bit-mlx")
instruction = """"Develop a musical piece using the given chord progression.
'Dm', 'C', 'Dm', 'Dm', 'C', 'Dm', 'C', 'Dm'
"""
prompt = prompt_template.safe_substitute({"inst": instruction})
response = generate(
model=model,
tokenizer=tokenizer,
prompt=prompt,
temp=0.6,
top_p=0.9,
max_tokens=1000,
repetition_penalty=1.1,
)
# pip install symusic
from symusic import Score, Synthesizer
import wave
abc_pattern = r"(X:\d+\n(?:[^\n]*\n)+)"
abc_notation = re.findall(abc_pattern, response + "\n")[0]
s = Score.from_abc(abc_notation)
audio = Synthesizer().render(s, stereo=True)
sample_rate = 44100
audio = (audio * 32767).astype("int16")
with wave.open("cm_music_piece.wav", "w") as wf:
wf.setnchannels(2)
wf.setsampwidth(2)
wf.setframerate(sample_rate)
wf.writeframes(audio.tobytes())
```
|
liminerity/Smaug-slerp-30b-v0.1
|
liminerity
| 2024-03-01T04:01:12Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"abacusai/Smaug-72B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-01T03:45:26Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- abacusai/Smaug-72B-v0.1
- abacusai/Smaug-72B-v0.1
---
# Smaug-slerpB-v0.1
Smaug-slerpB-v0.1 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: abacusai/Smaug-72B-v0.1
layer_range: [0, 32]
- model: abacusai/Smaug-72B-v0.1
layer_range: [0, 32]
merge_method: slerp
base_model: abacusai/Smaug-72B-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
EddyGiusepe/zephyr-7b-sft-lora
|
EddyGiusepe
| 2024-03-01T03:59:06Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-01T01:39:24Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: zephyr-7b-sft-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-lora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1548
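Because this repository holds a LoRA adapter rather than full model weights, loading goes through `peft`. A minimal sketch, not from the original card (the example prompt is a placeholder):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: attach the LoRA adapter to the base Mistral model.
base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "EddyGiusepe/zephyr-7b-sft-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, device_map="auto", torch_dtype="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("The capital of Peru is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```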
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 1.1565 |
| No log | 2.0 | 2 | 1.1548 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
alinerodrigues/wav2vec2-xlsr-1b-mecita-portuguese-all-10
|
alinerodrigues
| 2024-03-01T03:56:05Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-01T00:02:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-1b-mecita-portuguese-all-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-1b-mecita-portuguese-all-10
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-xls-r-1b-portuguese](https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1813
- Wer: 0.0938
- Cer: 0.0330
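No usage code is included in the card; a minimal transcription sketch follows (not from the original authors; `sample.wav` is a placeholder path):
```python
from transformers import pipeline

# Minimal sketch: transcribe a Portuguese audio file with the fine-tuned model.
asr = pipeline(
    "automatic-speech-recognition",
    model="alinerodrigues/wav2vec2-xlsr-1b-mecita-portuguese-all-10",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder
```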
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 27.3451 | 1.0 | 86 | 0.9640 | 0.5666 | 0.1819 |
| 3.9749 | 2.0 | 172 | 0.2877 | 0.1346 | 0.0469 |
| 0.4996 | 3.0 | 258 | 0.2273 | 0.1172 | 0.0421 |
| 0.3552 | 4.0 | 344 | 0.2042 | 0.1107 | 0.0393 |
| 0.2886 | 5.0 | 430 | 0.2033 | 0.1091 | 0.0391 |
| 0.2699 | 6.0 | 516 | 0.1935 | 0.1024 | 0.0377 |
| 0.2384 | 7.0 | 602 | 0.2023 | 0.1074 | 0.0383 |
| 0.2384 | 8.0 | 688 | 0.1939 | 0.0964 | 0.0352 |
| 0.212 | 9.0 | 774 | 0.1929 | 0.1029 | 0.0362 |
| 0.1951 | 10.0 | 860 | 0.1957 | 0.1026 | 0.0374 |
| 0.1637 | 11.0 | 946 | 0.2004 | 0.0971 | 0.0355 |
| 0.1877 | 12.0 | 1032 | 0.1847 | 0.0943 | 0.0335 |
| 0.1725 | 13.0 | 1118 | 0.1813 | 0.0938 | 0.0330 |
| 0.1405 | 14.0 | 1204 | 0.1911 | 0.0940 | 0.0336 |
| 0.1405 | 15.0 | 1290 | 0.1926 | 0.0959 | 0.0345 |
| 0.163 | 16.0 | 1376 | 0.1868 | 0.0959 | 0.0353 |
| 0.1355 | 17.0 | 1462 | 0.1965 | 0.0957 | 0.0351 |
| 0.1408 | 18.0 | 1548 | 0.1970 | 0.1021 | 0.0361 |
| 0.1276 | 19.0 | 1634 | 0.2001 | 0.0938 | 0.0349 |
| 0.1392 | 20.0 | 1720 | 0.1935 | 0.0909 | 0.0338 |
| 0.117 | 21.0 | 1806 | 0.1943 | 0.0976 | 0.0355 |
| 0.117 | 22.0 | 1892 | 0.2038 | 0.0995 | 0.0367 |
| 0.1369 | 23.0 | 1978 | 0.1898 | 0.0964 | 0.0341 |
| 0.1168 | 24.0 | 2064 | 0.1883 | 0.0919 | 0.0333 |
| 0.1128 | 25.0 | 2150 | 0.1822 | 0.0897 | 0.0324 |
| 0.1193 | 26.0 | 2236 | 0.1852 | 0.0931 | 0.0324 |
| 0.1023 | 27.0 | 2322 | 0.1934 | 0.0981 | 0.0329 |
| 0.1127 | 28.0 | 2408 | 0.1967 | 0.0885 | 0.0319 |
| 0.1127 | 29.0 | 2494 | 0.1878 | 0.0859 | 0.0322 |
| 0.0896 | 30.0 | 2580 | 0.1898 | 0.0897 | 0.0328 |
| 0.0917 | 31.0 | 2666 | 0.1935 | 0.0921 | 0.0331 |
| 0.0936 | 32.0 | 2752 | 0.1985 | 0.0916 | 0.0332 |
| 0.1012 | 33.0 | 2838 | 0.2010 | 0.0931 | 0.0336 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
McGill-NLP/Llama-2-13b-chat-weblinx
|
McGill-NLP
| 2024-03-01T03:54:33Z | 12 | 3 |
transformers
|
[
"transformers",
"pytorch",
"weblinx",
"text-generation-inference",
"web-agents",
"agents",
"text-generation",
"en",
"dataset:McGill-NLP/WebLINX",
"dataset:McGill-NLP/WebLINX-full",
"arxiv:2402.05930",
"license:llama2",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T08:58:05Z |
---
datasets:
- McGill-NLP/WebLINX
- McGill-NLP/WebLINX-full
language:
- en
metrics:
- f1
- iou
- chrf
library_name: transformers
pipeline_tag: text-generation
tags:
- weblinx
- text-generation-inference
- web-agents
- agents
license: llama2
---
<div align="center">
<h1 style="margin-bottom: 0.5em;">WebLINX: Real-World Website Navigation with Multi-Turn Dialogue</h1>
<em>Xing Han Lù*, Zdeněk Kasner*, Siva Reddy</em>
</div>
<div style="margin-bottom: 2em"></div>
<div style="display: flex; justify-content: space-around; align-items: center; font-size: 120%;">
<div><a href="https://arxiv.org/abs/2402.05930">📄Paper</a></div>
<div><a href="https://mcgill-nlp.github.io/weblinx">🌐Website</a></div>
<div><a href="https://huggingface.co/spaces/McGill-NLP/weblinx-explorer">💻Explorer</a></div>
<div><a href="https://huggingface.co/datasets/McGill-NLP/WebLINX">🤗Dataset</a></div>
<div><a href="https://github.com/McGill-NLP/weblinx">💾Code</a></div>
</div>
## Quickstart
```python
from datasets import load_dataset
from huggingface_hub import snapshot_download
from transformers import pipeline
# Load validation split
valid = load_dataset("McGill-NLP/weblinx", split="validation")
# Download and load the templates
snapshot_download(
"McGill-NLP/WebLINX", repo_type="dataset", allow_patterns="templates/*.txt", local_dir="./"
)
with open('templates/llama.txt') as f:
template = f.read()
turn = valid[0]
turn_text = template.format(**turn)
# Load action model and input the text to get prediction
action_model = pipeline(
model="McGill-NLP/Llama-2-13b-chat-weblinx", device=0, torch_dtype='auto'
)
out = action_model(turn_text, return_full_text=False, max_new_tokens=64, truncation=True)
pred = out[0]['generated_text']
print("Ref:", turn["action"])
print("Pred:", pred)
```
## Original Model
This model is finetuned on WebLINX using checkpoints previously published on Huggingface Hub.\
[Click here to access the original model.](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)
## License
This model is derived from LLaMA-2, which can only be used with the [LLaMA 2 Community License Agreement](https://github.com/facebookresearch/llama/blob/main/LICENSE). By using or distributing any portion or element of this model, you agree to be bound by this Agreement.
|
jimboHsueh/T-patcher-pretrained-bart
|
jimboHsueh
| 2024-03-01T03:54:20Z | 179 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-01T02:55:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
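Until official code is provided, here is a minimal hedged sketch inferred from the repo tags (`bart`, `text2text-generation`); the intended input format is unknown, so the input below is a placeholder.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch based on the repo tags; the expected input format is undocumented.
model_id = "jimboHsueh/T-patcher-pretrained-bart"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("...", return_tensors="pt")  # placeholder input
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```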
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|