| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
Qwen/Qwen2.5-0.5B
|
Qwen
| 2024-09-25T12:32:36Z | 380,719 | 180 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2407.10671",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-15T12:15:39Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# Qwen2.5-0.5B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the base 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
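As a quick check that your environment is set up correctly, here is a minimal generation sketch with `transformers` (the prompt and generation settings are illustrative; note that this is a base model and works with plain text completion rather than a chat template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

# Plain text completion: base models are not tuned for chat-style prompting
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```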
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite it.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
schmillan/pramstorage
|
schmillan
| 2024-09-25T12:31:39Z | 9 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-09-25T10:34:14Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: PRS
---
# Pramstorage
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `PRS` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline and attach this LoRA
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('schmillan/pramstorage', weight_name='lora.safetensors')
# Include the trigger word `PRS` in your prompt
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
|
SongTonyLi/OpenELM-450M-DPO-D1-HuggingFaceH4-ultrafeedback_binarized-Xlarge
|
SongTonyLi
| 2024-09-25T12:31:07Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"openelm",
"text-generation",
"trl",
"dpo",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-09-25T12:30:32Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jios/TON_IoT_no_ddos
|
Jios
| 2024-09-25T12:27:06Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-25T11:00:32Z |
---
library_name: transformers
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
model-index:
- name: TON_IoT_no_ddos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TON_IoT_no_ddos
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0024
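A minimal inference sketch with the 🤗 `pipeline` API (illustrative only; the input string is a placeholder, since the training data and label meanings are not documented here):
```python
from transformers import pipeline

# Load the fine-tuned classifier directly from the Hub
classifier = pipeline("text-classification", model="Jios/TON_IoT_no_ddos")
print(classifier("example network-traffic record serialized as text"))
```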
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1445 | 1.0 | 750 | 0.0046 |
| 0.0154 | 2.0 | 1500 | 0.0050 |
| 0.0082 | 3.0 | 2250 | 0.0050 |
| 0.0134 | 4.0 | 3000 | 0.0038 |
| 0.0018 | 5.0 | 3750 | 0.0024 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
f10solutions/tiny-llama-merged
|
f10solutions
| 2024-09-25T12:24:45Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:quantized:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-09-25T12:19:15Z |
---
base_model: unsloth/tinyllama-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** f10solutions
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Itamarnie/ppo-Huggy
|
Itamarnie
| 2024-09-25T12:23:47Z | 12 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-09-25T12:23:37Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Itamarnie/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
slimaneMakh/MultiLBinSClass_Property_Plant_and_Equipment_17june_student_XLMR
|
slimaneMakh
| 2024-09-25T12:11:11Z | 94 | 1 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-17T10:05:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sodowo/doc_meta
|
sodowo
| 2024-09-25T12:10:26Z | 32 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-09-25T09:48:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nbeerbower/mistral-nemo-gutenberg2-12B-test
|
nbeerbower
| 2024-09-25T12:08:23Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:nbeerbower/gutenberg2-dpo",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:finetune:mistralai/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-24T19:37:04Z |
---
license: apache-2.0
library_name: transformers
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
datasets:
- nbeerbower/gutenberg2-dpo
model-index:
- name: mistral-nemo-gutenberg2-12B-test
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 33.85
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/mistral-nemo-gutenberg2-12B-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 32.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/mistral-nemo-gutenberg2-12B-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 10.2
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/mistral-nemo-gutenberg2-12B-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.95
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/mistral-nemo-gutenberg2-12B-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.97
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/mistral-nemo-gutenberg2-12B-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.39
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/mistral-nemo-gutenberg2-12B-test
name: Open LLM Leaderboard
---
# mistral-nemo-gutenberg2-12B-test
[mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) finetuned on [nbeerbower/gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo).
This model is a test for the sake of benchmarking my gutenberg2 dataset.
### Method
Finetuned using an RTX 3090 for 3 epochs.
[Fine-tune Llama 3 with ORPO](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html)
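A minimal usage sketch with `transformers` (assuming the chat template inherited from Mistral-Nemo-Instruct-2407; the prompt and sampling settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/mistral-nemo-gutenberg2-12B-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write the opening paragraph of a gothic short story."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```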
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__mistral-nemo-gutenberg2-12B-test)
| Metric |Value|
|-------------------|----:|
|Avg. |20.73|
|IFEval (0-Shot) |33.85|
|BBH (3-Shot) |32.04|
|MATH Lvl 5 (4-Shot)|10.20|
|GPQA (0-shot) | 8.95|
|MuSR (0-shot) |10.97|
|MMLU-PRO (5-shot) |28.39|
|
vansh02062002/custom-sentiment-model
|
vansh02062002
| 2024-09-25T12:06:00Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-25T12:05:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
downtown1/google-gemma-2b-1727265155
|
downtown1
| 2024-09-25T11:52:46Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2024-09-25T11:52:35Z |
---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
spow12/ChatWaifu_22B_v2.0_preview
|
spow12
| 2024-09-25T11:50:37Z | 10 | 6 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"nsfw",
"Visual novel",
"roleplay",
"mergekit",
"merge",
"conversational",
"en",
"ja",
"dataset:roleplay4fun/aesir-v1.1",
"dataset:kalomaze/Opus_Instruct_3k",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted",
"dataset:Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted",
"dataset:SkunkworksAI/reasoning-0.01",
"base_model:mistralai/Mistral-Small-Instruct-2409",
"base_model:finetune:mistralai/Mistral-Small-Instruct-2409",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-23T07:51:45Z |
---
language:
- en
- ja
license: cc-by-nc-4.0
library_name: transformers
tags:
- nsfw
- Visual novel
- roleplay
- mergekit
- merge
base_model:
- mistralai/Mistral-Small-Instruct-2409
datasets:
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- SkunkworksAI/reasoning-0.01
pipeline_tag: text-generation
model-index:
- name: ChatWaifu_22B_v2.0_preview
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 67.45
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 45.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 16.31
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.72
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.53
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 33.2
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_22B_v2.0_preview
name: Open LLM Leaderboard
---
# Model Card for Model ID

Merged model using [mergekit](https://github.com/arcee-ai/mergekit/tree/main/mergekit)
This model is aimed at acting like a visual novel character.
## Merge Format
```yaml
models:
- model: mistralai/Mistral-Small-Instruct-2409_SFT
layer_range: [0, 56]
- model: mistralai/Mistral-Small-Instruct-2409
layer_range: [0, 56]
merge_method: slerp
base_model: mistralai/Mistral-Small-Instruct-2409_SFT
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
# WaifuModel Collections
- [TTS](https://huggingface.co/spow12/visual_novel_tts)
- [Chat](https://huggingface.co/spow12/ChatWaifu_22B_v2.0)
- [ASR](https://huggingface.co/spow12/Visual-novel-transcriptor)
# Unified demo
[WaifuAssistant](https://github.com/yw0nam/WaifuAssistant)
# Update 2.0
- 2024.09.23 Update 22B, Ver 2.0
## Model Details
### Model Description
- **Developed by:** spow12(yw_nam)
- **Shared by :** spow12(yw_nam)
- **Model type:** CausalLM
- **Language(s) (NLP):** Japanese, English
- **Finetuned from model :** [mistralai/Mistral-Small-Instruct-2409](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409)
Currently, the chatbot has the personalities listed below.
character | visual_novel |
--- | --- |
ムラサメ | Senren*Banka |
茉子 | Senren*Banka |
芳乃 | Senren*Banka |
レナ | Senren*Banka |
千咲 | Senren*Banka |
芦花 | Senren*Banka |
愛衣 | Café Stella and the Reaper's Butterflies |
栞那 | Café Stella and the Reaper's Butterflies |
ナツメ | Café Stella and the Reaper's Butterflies |
希 | Café Stella and the Reaper's Butterflies |
涼音 | Café Stella and the Reaper's Butterflies |
あやせ | Riddle Joker |
七海 | Riddle Joker |
羽月 | Riddle Joker |
茉優 | Riddle Joker |
小春 | Riddle Joker |
But you can also chat with your own character by supplying its persona as text.
Feel free to test it.
Your feedback will be helpful for improving the model.
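For example, a minimal zero-shot persona sketch with `transformers` (assuming the chat template inherited from Mistral-Small-Instruct-2409; the persona text, prompt, and sampling settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spow12/ChatWaifu_22B_v2.0_preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Describe your own character; if the chat template rejects a system role,
# prepend the persona to the first user message as done here.
persona = "You are a cheerful shrine maiden who speaks politely and loves tea."
messages = [
    {"role": "user", "content": persona + "\n\nGood morning! What are you doing today?"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```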
### Dataset
Riddle Joker (Private)
Café Stella and the Reaper's Butterflies (Private)
Senren*Banka (Private)
roleplay4fun/aesir-v1.1
kalomaze/Opus_Instruct_3k
Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
Aratako/Synthetic-JP-EN-Coding-Dataset-567k (only 50,000 samples used)
Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
SkunkworksAI/reasoning-0.01
### Features
- Fluent chat performance
- Reduced repetition in long conversations (over 20-30 turns)
- Zero-shot character persona from a character description
- 128k context window
- Memory that persists even after long-context generation
## Demo
You can use Demo in google colab.
Check [Here](https://colab.research.google.com/drive/194_FN28reEPTwS51dwpLLBBwEfeoBjP9?usp=sharing)
## Bias, Risks, and Limitations
This model can generate NSFW content.
## Use & Credit
This model is currently available for non-commercial and research purposes only.
Also, since I am not well versed in licensing, I hope you use this model responsibly.
By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and Waifu Lovers).
## Citation
```bibtex
@misc{ChatWaifu_22B_v2.0,
author = { YoungWoo Nam },
title = { ChatWaifu_22B_v2.0_preview },
year = 2024,
url = { https://huggingface.co/spow12/ChatWaifu_22B_v2.0_preview },
publisher = { Hugging Face }
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_spow12__ChatWaifu_22B_v2.0_preview)
| Metric |Value|
|-------------------|----:|
|Avg. |29.12|
|IFEval (0-Shot) |67.45|
|BBH (3-Shot) |45.49|
|MATH Lvl 5 (4-Shot)|16.31|
|GPQA (0-shot) | 8.72|
|MuSR (0-shot) | 3.53|
|MMLU-PRO (5-shot) |33.20|
|
Anitha008/Malayalam_QA_model
|
Anitha008
| 2024-09-25T11:41:02Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-09-25T11:40:38Z |
---
library_name: transformers
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: Malayalam_QA_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Malayalam_QA_model
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
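A minimal extractive-QA sketch with the 🤗 `pipeline` API (illustrative; the question and context strings are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Anitha008/Malayalam_QA_model")
result = qa(question="your Malayalam question", context="your Malayalam context passage")
print(result["answer"], result["score"])
```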
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 343 | 0.0000 |
| 0.0005 | 2.0 | 686 | 0.0000 |
| 0.0007 | 3.0 | 1029 | 0.0000 |
| 0.0007 | 4.0 | 1372 | 0.0000 |
| 0.0 | 5.0 | 1715 | 0.0000 |
| 0.0 | 6.0 | 2058 | 0.0000 |
| 0.0 | 7.0 | 2401 | 0.0000 |
| 0.0 | 8.0 | 2744 | 0.0000 |
| 0.0 | 9.0 | 3087 | 0.0000 |
| 0.0 | 10.0 | 3430 | 0.0000 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
SamagraDataGov/whisper-hindi-test
|
SamagraDataGov
| 2024-09-25T11:35:02Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-09-25T10:28:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
acjap/flux_model
|
acjap
| 2024-09-25T11:02:13Z | 31 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-09-21T22:38:34Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# flux_model
<Gallery />
## Model description
## Trigger words
You should use `` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/acjap/flux_model/tree/main) them in the Files & versions tab.
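A minimal loading sketch with 🧨 diffusers (illustrative; the `weight_name` below is an assumption — substitute the actual `.safetensors` filename from this repo):
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
# weight_name is an assumption; check the Files & versions tab for the real filename
pipeline.load_lora_weights("acjap/flux_model", weight_name="lora.safetensors")
image = pipeline("your prompt").images[0]
```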
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
Ruthvik23/whisper-small-hi
|
Ruthvik23
| 2024-09-25T10:52:51Z | 62 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-09-25T06:38:59Z |
---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - jiobrain
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: None
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 34.48742910353001
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - jiobrain
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5730
- Wer: 34.4874
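A minimal transcription sketch (the audio path is a placeholder, and forcing Hindi decoding via `generate_kwargs` is an assumption based on the training language of this checkpoint):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Ruthvik23/whisper-small-hi")

# "sample_hindi.wav" is a placeholder path to a local audio file.
result = asr("sample_hindi.wav", generate_kwargs={"language": "hindi", "task": "transcribe"})
print(result["text"])
```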
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.003 | 9.7800 | 1000 | 0.4521 | 34.2123 |
| 0.0004 | 19.5599 | 2000 | 0.5207 | 34.3562 |
| 0.0001 | 29.3399 | 3000 | 0.5607 | 34.1404 |
| 0.0001 | 39.1198 | 4000 | 0.5730 | 34.4874 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
jancd/Llama-3.1-8B-instructions_python_18k_alpaca
|
jancd
| 2024-09-25T10:44:30Z | 29 | 1 | null |
[
"gguf",
"llama",
"en",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:quantized:meta-llama/Llama-3.1-8B",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-09-25T09:24:53Z |
---
license: mit
datasets:
- iamtarun/python_code_instructions_18k_alpaca
language:
- en
base_model:
- meta-llama/Meta-Llama-3.1-8B
---
|
xomad/gliner-model-merge-large-v1.0
|
xomad
| 2024-09-25T10:38:59Z | 521 | 18 |
gliner
|
[
"gliner",
"pytorch",
"NER",
"token-classification",
"en",
"dataset:knowledgator/GLINER-multi-task-synthetic-data",
"dataset:EmergentMethods/AskNews-NER-v0",
"dataset:urchade/pile-mistral-v0.1",
"dataset:MultiCoNER/multiconer_v2",
"dataset:DFKI-SLT/few-nerd",
"arxiv:2203.05482",
"arxiv:2406.12925",
"arxiv:2311.08526",
"base_model:knowledgator/gliner-multitask-large-v0.5",
"base_model:finetune:knowledgator/gliner-multitask-large-v0.5",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2024-09-24T10:40:22Z |
---
license: apache-2.0
language:
- en
metrics:
- f1
- recall
- precision
tags:
- NER
pipeline_tag: token-classification
library_name: gliner
datasets:
- knowledgator/GLINER-multi-task-synthetic-data
- EmergentMethods/AskNews-NER-v0
- urchade/pile-mistral-v0.1
- MultiCoNER/multiconer_v2
- DFKI-SLT/few-nerd
base_model: knowledgator/gliner-multitask-large-v0.5
---

The `xomad/gliner-model-merge-large-v1.0` model is developed from the pretrained `knowledgator/gliner-multitask-large-v0.5` model to explore model merging techniques, yielding a significant boost of 3.25 points and raising the zero-shot NER F1-score from 0.6276 to 0.6601.
The model is trained exclusively on datasets with commercial-friendly licenses to ensure broad applicability under the Apache-2.0 license. The following datasets were used in the training process:
- [knowledgator/GLINER-multi-task-synthetic-data](https://huggingface.co/datasets/knowledgator/GLINER-multi-task-synthetic-data)
- [EmergentMethods/AskNews-NER-v0](https://huggingface.co/datasets/EmergentMethods/AskNews-NER-v0)
- [urchade/pile-mistral-v0.1](https://huggingface.co/datasets/urchade/pile-mistral-v0.1)
- [MultiCoNER/multiconer_v2](https://huggingface.co/datasets/MultiCoNER/multiconer_v2)
- [DFKI-SLT/few-nerd](https://huggingface.co/datasets/DFKI-SLT/few-nerd)
### ⚙️ Finetuning process
The process begins with the base model `knowledgator/gliner-multitask-large-v0.5`, which is fine-tuned separately on each of the datasets listed above, with multiple checkpoints saved along the way. All of these checkpoints are pooled, and the [Model soups](https://arxiv.org/abs/2203.05482) technique is applied to produce different merged models:
- `uniform_merged`
- `greedy_on_random`
- `greedy_on_sorted`
Following this, we apply the [WiSE-FT](https://openaccess.thecvf.com/content/CVPR2022/html/Wortsman_Robust_Fine-Tuning_of_Zero-Shot_Models_CVPR_2022_paper.html?ref=roboflow-blog) merging technique to pairs of models drawn from the three merged models above and the original model, producing the `wise_ft_merged` model. This concludes the 1st finetuning phase.
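Conceptually, both merging steps are simple weight-space operations. The sketch below illustrates a uniform model soup (element-wise averaging of checkpoint weights) and a WiSE-FT style interpolation; it is a generic PyTorch illustration of the two techniques, not the exact script used to produce this model.
```python
import torch

def uniform_soup(state_dicts):
    """Uniform model soup: element-wise average of checkpoint weights."""
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

def wise_ft(original, finetuned, alpha=0.5):
    """WiSE-FT: linear interpolation between original and fine-tuned (or merged) weights."""
    return {
        key: (1.0 - alpha) * original[key].float() + alpha * finetuned[key].float()
        for key in original
    }
```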
The process is then repeated in the 2nd finetuning phase, using the `wise_ft_merged` as the new starting point, to produce the final model. The whole finetuning flow is illustrated in the following figure:

The performance of the pool of fine-tuned models and of the merged models is evaluated on the CrossNER and TwitterNER benchmarks and plotted in the following two figures (as `crossner_f1` and `other_f1`, respectively).
The 1st finetuning phase plot:

The 2nd finetuning phase plot:

### 🛠️ Installation
To use this model, you must install the [GLiNER Python library](https://github.com/urchade/GLiNER):
```bash
pip install gliner
```
Once the GLiNER library is installed, you can import the GLiNER class and load this model with `GLiNER.from_pretrained`.
### 💻 Usage
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("xomad/gliner-model-merge-large-v1.0")
text = """
Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975 to develop and sell BASIC interpreters for the Altair 8800. During his career at Microsoft, Gates held the positions of chairman, chief executive officer, president and chief software architect, while also being the largest individual shareholder until May 2014.
"""
labels = ["founder", "computer", "software", "position", "date", "company"]
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
Output:
```
Microsoft => company
Bill Gates => founder
Paul Allen => founder
April 4, 1975 => date
BASIC => software
Altair 8800 => computer
Microsoft => company
chairman => position
chief executive officer => position
president => position
chief software architect => position
May 2014 => date
```
### 📊 Benchmarks:

Performance on different zero-shot NER benchmarks (CrossNER, mit-movie and mit-restaurant), numbers reported from https://huggingface.co/knowledgator/gliner-multitask-large-v0.5:
| Model | F1 Score |
|---------------------------------------------------------------------------------------------------------|-------------|
| [xomad/gliner-model-merge-large-v1.0](https://huggingface.co/xomad/gliner-model-merge-large-v1.0) | **0.6601** |
| [knowledgator/gliner-multitask-v0.5](https://huggingface.co/knowledgator/gliner-multitask-v0.5) | _0.6276_ |
| [numind/NuNER_Zero-span](https://huggingface.co/numind/NuNER_Zero-span) | 0.6196 |
| [gliner-community/gliner_large-v2.5](https://huggingface.co/gliner-community/gliner_large-v2.5) | 0.615 |
| [EmergentMethods/gliner_large_news-v2.1](https://huggingface.co/EmergentMethods/gliner_large_news-v2.1) | 0.5876 |
| [urchade/gliner_large-v2.1](https://huggingface.co/urchade/gliner_large-v2.1) | 0.5754 |
Detailed performance on different datasets:
| Model | Dataset | Precision | Recall | F1 Score | F1 Score (Decimal) |
|------------------------------------|--------------------|-----------|--------|----------|--------------------|
| xomad/gliner-model-merge-large-v1.0 | CrossNER_AI | 62.66% | 57.48% | 59.96% | 0.5996 |
| | CrossNER_literature | 73.28% | 66.42% | 69.68% | 0.6968 |
| | CrossNER_music | 74.89% | 70.67% | 72.72% | 0.7272 |
| | CrossNER_politics | 79.46% | 77.57% | 78.51% | 0.7851 |
| | CrossNER_science | 74.72% | 70.24% | 72.41% | 0.7241 |
| | mit-movie | 67.33% | 57.89% | 62.25% | 0.6225 |
| | mit-restaurant | 54.94% | 40.41% | 46.57% | 0.4657 |
| | **Average** | | | | **0.6601** |
| numind/NuNER_Zero-span | CrossNER_AI | 63.82% | 56.82% | 60.12% | 0.6012 |
| | CrossNER_literature| 73.53% | 58.06% | 64.89% | 0.6489 |
| | CrossNER_music | 72.69% | 67.40% | 69.95% | 0.6995 |
| | CrossNER_politics | 77.28% | 68.69% | 72.73% | 0.7273 |
| | CrossNER_science | 70.08% | 63.12% | 66.42% | 0.6642 |
| | mit-movie | 63.00% | 48.88% | 55.05% | 0.5505 |
| | mit-restaurant | 54.81% | 37.62% | 44.62% | 0.4462 |
| | **Average** | | | | **0.6196** |
| knowledgator/gliner-multitask-v0.5 | CrossNER_AI | 51.00% | 51.11% | 51.05% | 0.5105 |
| | CrossNER_literature | 72.65% | 65.62% | 68.96% | 0.6896 |
| | CrossNER_music | 74.91% | 73.70% | 74.30% | 0.7430 |
| | CrossNER_politics | 78.84% | 77.71% | 78.27% | 0.7827 |
| | CrossNER_science | 69.20% | 65.48% | 67.29% | 0.6729 |
| | mit-movie | 61.29% | 52.59% | 56.60% | 0.5660 |
| | mit-restaurant | 50.65% | 38.13% | 43.51% | 0.4351 |
| | **Average** | | | | **0.6276** |
| gliner-community/gliner_large-v2.5 | CrossNER_AI | 50.85% | 63.03% | 56.29% | 0.5629 |
| | CrossNER_literature | 64.92% | 67.21% | 66.04% | 0.6604 |
| | CrossNER_music | 70.88% | 73.10% | 71.97% | 0.7197 |
| | CrossNER_politics | 72.67% | 72.93% | 72.80% | 0.7280 |
| | CrossNER_science | 61.71% | 68.85% | 65.08% | 0.6508 |
| | mit-movie | 54.63% | 52.83% | 53.71% | 0.5371 |
| | mit-restaurant | 47.99% | 42.13% | 44.87% | 0.4487 |
| | **Average** | | | | **0.6154** |
| urchade/gliner_large-v2.1 | CrossNER_AI | 54.98% | 52.00% | 53.45% | 0.5345 |
| | CrossNER_literature| 59.33% | 56.47% | 57.87% | 0.5787 |
| | CrossNER_music | 67.39% | 66.77% | 67.08% | 0.6708 |
| | CrossNER_politics | 66.07% | 63.76% | 64.90% | 0.6490 |
| | CrossNER_science | 61.45% | 62.56% | 62.00% | 0.6200 |
| | mit-movie | 55.94% | 47.36% | 51.29% | 0.5129 |
| | mit-restaurant | 53.34% | 40.83% | 46.25% | 0.4625 |
| | **Average** | | | | **0.5754** |
| EmergentMethods/gliner_large_news-v2.1| CrossNER_AI | 59.60% | 54.55% | 56.96% | 0.5696 |
| | CrossNER_literature| 65.41% | 56.16% | 60.44% | 0.6044 |
| | CrossNER_music | 67.47% | 63.08% | 65.20% | 0.6520 |
| | CrossNER_politics | 66.05% | 60.07% | 62.92% | 0.6292 |
| | CrossNER_science | 68.44% | 63.57% | 65.92% | 0.6592 |
| | mit-movie | 65.85% | 49.59% | 56.57% | 0.5657 |
| | mit-restaurant | 54.71% | 35.94% | 43.38% | 0.4338 |
| | **Average** | | | | **0.5876** |
### Authors
Hoan Nguyen, at xomad.com
### Citations
```
@misc{wortsman2022modelsoupsaveragingweights,
title={Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time},
author={Mitchell Wortsman and Gabriel Ilharco and Samir Yitzhak Gadre and Rebecca Roelofs and Raphael Gontijo-Lopes and Ari S. Morcos and Hongseok Namkoong and Ali Farhadi and Yair Carmon and Simon Kornblith and Ludwig Schmidt},
year={2022},
eprint={2203.05482},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2203.05482},
}
@InProceedings{Wortsman_2022_CVPR,
author = {Wortsman, Mitchell and Ilharco, Gabriel and Kim, Jong Wook and Li, Mike and Kornblith, Simon and Roelofs, Rebecca and Lopes, Raphael Gontijo and Hajishirzi, Hannaneh and Farhadi, Ali and Namkoong, Hongseok and Schmidt, Ludwig},
title = {Robust Fine-Tuning of Zero-Shot Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {7959-7971}
}
@misc{stepanov2024gliner,
title={GLiNER multi-task: Generalist Lightweight Model for Various Information Extraction Tasks},
author={Ihor Stepanov and Mykhailo Shtopko},
year={2024},
eprint={2406.12925},
archivePrefix={arXiv},
      primaryClass={cs.LG}
}
@misc{zaratiana2023gliner,
title={GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer},
author={Urchade Zaratiana and Nadi Tomeh and Pierre Holat and Thierry Charnois},
year={2023},
eprint={2311.08526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
SamagraDataGov/whisper_test_azure
|
SamagraDataGov
| 2024-09-25T10:37:53Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-09-25T10:37:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kamel/t5-darija-summarization
|
Kamel
| 2024-09-25T10:28:52Z | 131 | 7 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ar",
"ary",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language:
- ar
- ary
widget:
- text: " كشف الملياردير الميريكاني ومؤسس شركة “مايكروسوفت”، بيل كَيتس، بللي ماعندوش حتى فلوس رقمية، وكيفضل يستثمر فلوسو فالأشياء اللي عندها قيمة، حسب كلامو. جريدة “بريطانية قالت أن تصريحات كَيتس على العملات المشفرة كانت بمناسبة حدث “سولني على أي حاجة”، اللي تنظم على موقع “ريديت” الشهير.بيل كَيتس اللي واصلة لافورتين ديالو ل116 مليار دولار، وهو رابع أغنى رجل فالعالم، جات تصريحاتو بالتزامن مع خسارة العملات الرقمية لتريليون دولار من قيمتها فعام 2022، وضاعت فحوالي 200 مليار دولار من قيمتها ف24 ساعة فقط فوقت سابق من هذا الشهر."
---
# MArSum: Moroccan Articles Summarization dataset
- [Description](#description)
- [Dataset](#dataset)
- [Citation](#citation)
- [License](#license)
## Description
This dataset contains **19,806** news articles written in Moroccan Arabic dialect along with their titles. The articles were crawled from [Goud.ma](http://www.goud.ma) website between 01/01/2018 and 12/31/2020.
The articles are written mainly in Moroccan Arabic dialect (Darija) but some of them contain Modern Standard Arabic (MSA) passages. All the titles are written in Darija.
The following table summarizes some statistics on the MArSum dataset.
<table class="tg">
<thead>
<tr>
<th class="tg-0pky" rowspan="2">Size</th>
<th class="tg-0pky" colspan="3">Titles length</th>
<th class="tg-0pky" colspan="3">Articles length</th>
</tr>
<tr>
<th class="tg-lqy6">Min.</th>
<th class="tg-lqy6">Max.</th>
<th class="tg-lqy6">Avg.</th>
<th class="tg-lqy6">Min.</th>
<th class="tg-lqy6">Max.</th>
<th class="tg-0lax">Avg.</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-dvpl">19,806</td>
<td class="tg-dvpl">2</td>
<td class="tg-dvpl">74</td>
<td class="tg-dvpl">14.6</td>
<td class="tg-dvpl">30</td>
<td class="tg-dvpl">2964</td>
<td class="tg-0pky">140.7</td>
</tr>
</tbody>
</table>
The following figure describes the creation process of MArSum:

You may refer to our paper, cited below, for more details on this process.
## Dataset
The dataset is split into Train/Test subsets using a 90/10 split strategy. Both subsets are available for direct [download](https://github.com/KamelGaanoun/MoroccanSummarization).
## Citation
Please cite the following paper if you decide to use the dataset:
Gaanoun, K., Naira, A. M., Allak, A., & Benelallam, I. (2022). Automatic Text Summarization for Moroccan Arabic Dialect
Using an Artificial Intelligence Approach. In International Conference on Business Intelligence (pp. 158-177). Springer, Cham.
## License
The dataset is distributed under the CC BY 4.0 license.
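Beyond the dataset, this repository hosts the T5 Darija summarization model itself. A quick-start sketch with the standard `transformers` seq2seq API is shown below; the generation settings are illustrative assumptions rather than recommended values.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Kamel/t5-darija-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("Kamel/t5-darija-summarization")

article = "..."  # a Darija news article, e.g. the widget example above
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=30, num_beams=4)  # titles average ~15 words
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```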
|
SI2M-Lab/DarijaBERT-arabizi
|
SI2M-Lab
| 2024-09-25T10:25:55Z | 278 | 6 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"ar",
"ary",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- ar
- ary
widget:
- text: " Mchit njib [MASK] ."
- text: " Yak nta li [MASK] lih dik lhedra."
- text: " Ach [MASK] daba."
- text: " Lmghrib ajmal [MASK] fl3alam."
---
AIOX Lab and SI2M Lab INSEA have joined forces to offer researchers, industrialists and the NLP (Natural Language Processing) community the first intelligent open-source system that understands the Moroccan dialectal language "Darija".
**DarijaBERT** is the first BERT model for the Moroccan Arabic dialect called “Darija”. It is based on the same architecture as BERT-base, but without the Next Sentence Prediction (NSP) objective. This model is the Arabizi-specific version of DarijaBERT; it was trained on a total of ~4.6 million sequences of the Darija dialect written in Latin letters.
The model was trained on a dataset of YouTube comments.
More details about DarijaBert are available in the dedicated GitHub [repository](https://github.com/AIOXLABS/DBert)
**Loading the model**
The model can be loaded directly using the Huggingface library:
```python
from transformers import AutoTokenizer, AutoModel
DarijaBERT_tokenizer = AutoTokenizer.from_pretrained("SI2M-Lab/DarijaBERT-arabizi")
DarijaBert_model = AutoModel.from_pretrained("SI2M-Lab/DarijaBERT-arabizi")
```
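For a quick check, masked-word prediction can also be run through the `fill-mask` pipeline; the example sentence is taken from the widget above.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SI2M-Lab/DarijaBERT-arabizi")
for prediction in fill_mask("Mchit njib [MASK] ."):
    print(prediction["token_str"], round(prediction["score"], 3))
```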
**Citation**
If you use our models for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
```
@article{gaanoun2023darijabert,
title={Darijabert: a Step Forward in Nlp for the Written Moroccan Dialect},
author={Gaanoun, Kamel and Naira, Abdou Mohamed and Allak, Anass and Benelallam, Imade},
year={2023}
}
```
**Acknowledgments**
We gratefully acknowledge Google’s TensorFlow Research Cloud (TRC) program for providing us with free Cloud TPUs.
<font size =2>**Warning**
Because this model was trained on texts from social networks, it can unfortunately generate toxic outputs that reflect part of the training data.</font>
|
Kamel/AraSDG_MultiClass
|
Kamel
| 2024-09-25T10:18:33Z | 93 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-24T11:49:28Z |
---
library_name: transformers
tags: []
language: ar # <-- my language
widget:
- text: "يواجه العالم اليوم بعضًا من أكبر التحديات التي واجهها منذ عدة أجيال، وهي تحديات تهدد ازدهار الناس واستقرارهم في كافة أنحاء العالم. ووباء الفساد في معظمها.
فللفساد آثار سليبة على كل جانب من جوانب المجتمع، حيث يتشابك تشابكا وثيقا مع الصراعات والاضطرابات مما يهدد التنمية الاجتماعية والاقتصادية ويقوض أسس المؤسسات الديمقراطية وسيادة القانون.
ولا يتبع الفساد الصراع فحسب، بل هو كذلك أحد أسبابه الجذرية في كثير من الأحيان. فهو بتقويضه سيادة القانون يغذي الصراعات ويعيق عمليات إحلال السلام، فضلا عن أنه يفاهم الفقر، ويسهل الاستخدام المُجّرم للموارد، وإتاحة التمويل للنزاع المسلح.
إن منع الفساد وتعزيز الشفافية وتقوية المؤسسات أمر بالغ الأهمية إذا أريد تحقيق الغايات المتوخاة في أهداف التنمية المستدامة.
ويُراد من احتفالية اليوم العالمي لمكافحة الفساد لعام 2023 تسليط الضوء على الصلة الوثيقة بين مكافحة الفساد والسلام والأمن والتنمية. فجوهر تلك الصلة هو فكرة أن التصدي لهذه الجريمة حق للجميع ومسؤوليتهم، وأن التعاون ومشاركة هما ما يمكنا الأشخاص والمؤسسات من التغلب على الأثر السلبي لهذه الجريمة. فهناك دور للدول وللمسؤولين الحكوميين وللموظفين المدنيين ولموظفي إنفاذ القانون وممثلي وسائل الإعلام والقطاع الخاص وللمجتمع المدني وللأوساط الأكاديمية وللجمهور العام وللشباب بصورة خاصة في توحيد العالم ضد الفساد."
---
# Multiclass SDG Detection with ArBERTv2
This model is a multiclass classifier fine-tuned on the ArBERTv2 architecture, designed to identify specific Sustainable Development Goals (SDGs) mentioned in Arabic text. It classifies text into multiple SDG categories once it has been identified as SDG-related.
# Prerequisite
Before running this model, input texts must first be classified as SDG-related using the binary classifier [Kamel/AraSDG_Binary](https://huggingface.co/Kamel/AraSDG_Binary). This model only applies to articles that are confirmed to mention SDGs.
## Model Details
### Intended Use
This model is intended for use in detecting specific SDGs within Arabic text that has already been identified as SDG-related. It can be applied to large collections of articles, reports, or social media texts for content classification across multiple SDG categories.
### How to Use
#### Step 1: Use the Binary SDG Classifier
Ensure that articles are first passed through the binary SDG classifier to determine if they are SDG-related. Only proceed with articles where the binary classifier predicts an SDG-related output.
#### Step 2: Use the Multiclass Model
Once an article is classified as SDG-related, use the following code to predict the specific SDG category.
````python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Kamel/AraSDG_MultiClass")
model = AutoModelForSequenceClassification.from_pretrained("Kamel/AraSDG_MultiClass")
# Example text input (only use if the binary classifier predicts SDG-related)
text = "your Arabic text here"
# Tokenize input
inputs = tokenizer(text, return_tensors="pt")
# Perform inference
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
# Get the predicted class (specific SDG class)
predicted_class = torch.argmax(logits, dim=-1).item()
# Print the result (you can map this to specific SDG labels as needed)
print(f"Predicted SDG class: {predicted_class}")
````
### Training Data
The model was fine-tuned on a dataset of Arabic news articles, each labeled with a specific SDG category (e.g., SDG 1, SDG 2, etc.). The training data was augmented with synthetic content to ensure balanced representation across different SDGs.
### Performance
The model achieves an average macro F1-score of 87%, performing well across a range of SDG categories.
### Limitations
* Prerequisite: This model assumes the input text has already been classified as SDG-related by a binary classifier.
* The model is trained on Modern Standard Arabic (MSA) and may not perform as well on dialectal variations.
* Some SDGs may have more training data than others, leading to potential bias in predictions.
|
student-abdullah/Llama3.1_medicine_fine-tuned_24-09_16bit_gguf
|
student-abdullah
| 2024-09-25T10:17:53Z | 15 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"torch",
"trl",
"unsloth",
"en",
"dataset:student-abdullah/BigPharma_Generic_Q-A_Format_Augemented_Dataset",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:quantized:meta-llama/Llama-3.1-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-25T09:26:31Z |
---
base_model: meta-llama/Meta-Llama-3.1-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- torch
- trl
- unsloth
- llama
- gguf
datasets:
- student-abdullah/BigPharma_Generic_Q-A_Format_Augemented_Dataset
---
# Uploaded model
- **Developed by:** student-abdullah
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Meta-Llama-3.1-8B
- **Created on:** 25th September, 2024
---
# Acknowledgement
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
---
# Model Description
This model is fine-tuned from the meta-llama/Meta-Llama-3.1-8B base model to enhance its capabilities in generating relevant and accurate responses related to generic medications under the PMBJP scheme. The fine-tuning process included the following hyperparameters:
- Fine Tuning Template: Llama 3.1 Q&A
- Max Tokens: 512
- LoRA Alpha: 10
- LoRA Rank (r): 128
- Learning rate: 2e-4
- Gradient Accumulation Steps: 32
- Batch Size: 4
- Quantization: 16 bits
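A minimal PEFT/`transformers` sketch of the configuration implied by the hyperparameters above (dataset, tokenizer and model wiring are omitted, and the output directory is a placeholder):
```python
from peft import LoraConfig
from transformers import TrainingArguments

# Reconstruction of the listed hyperparameters; not the exact training script.
lora_config = LoraConfig(r=128, lora_alpha=10, task_type="CAUSAL_LM")

training_args = TrainingArguments(
    output_dir="llama31-pmbjp-lora",   # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,
)
```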
---
# Model Quantitative Performance
- Training Quantitative Loss: 0.1676 (at final 160th epoch)
---
# Limitations
- Token Limitations: With a max token limit of 512, the model might not handle very long queries or contexts effectively.
- Training Data Limitations: The model’s performance is contingent on the quality and coverage of the fine-tuning dataset, which may affect its generalizability to different contexts or medications not covered in the dataset.
- Potential Biases: As with any model fine-tuned on specific data, there may be biases based on the dataset used for training.
|
heodi510/xlm-roberta-base-finetuned-panx-all
|
heodi510
| 2024-09-25T10:15:50Z | 111 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-25T10:01:45Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1753
- F1: 0.8554
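A minimal usage sketch with the `token-classification` pipeline; the example sentence and the aggregation setting are illustrative choices, not part of the training setup.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="heodi510/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Angela Merkel besuchte im Juli Paris."))
```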
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2986 | 1.0 | 835 | 0.1978 | 0.8176 |
| 0.1548 | 2.0 | 1670 | 0.1774 | 0.8409 |
| 0.1007 | 3.0 | 2505 | 0.1753 | 0.8554 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.0.0
- Datasets 3.0.0
- Tokenizers 0.19.1
|
mradermacher/Spartan_v1.0-GGUF
|
mradermacher
| 2024-09-25T10:13:05Z | 15 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Lucyfer1718/Spartan_v1.0",
"base_model:quantized:Lucyfer1718/Spartan_v1.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-24T20:13:46Z |
---
base_model: Lucyfer1718/Spartan_v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Lucyfer1718/Spartan_v1.0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Spartan_v1.0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
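As a concrete sketch (not from the upstream README), one of the quants listed below can be fetched and run with `llama-cpp-python`; the file name matches the Q4_K_M row in the table.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# File name taken from the Q4_K_M row in the table below.
path = hf_hub_download("mradermacher/Spartan_v1.0-GGUF", "Spartan_v1.0.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Q: What is a GGUF file? A:", max_tokens=64)["choices"][0]["text"])
```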
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Spartan_v1.0-GGUF/resolve/main/Spartan_v1.0.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
heodi510/xlm-roberta-base-finetuned-panx-en
|
heodi510
| 2024-09-25T10:01:42Z | 121 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-25T09:58:25Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3940
- F1: 0.6785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0081 | 1.0 | 50 | 0.4772 | 0.6028 |
| 0.4589 | 2.0 | 100 | 0.4167 | 0.6756 |
| 0.3734 | 3.0 | 150 | 0.3940 | 0.6785 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.0.0
- Datasets 3.0.0
- Tokenizers 0.19.1
|
dagrodiluksha/my_awesome_model
|
dagrodiluksha
| 2024-09-25T09:56:32Z | 95 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-23T05:11:54Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1679
- Accuracy: 0.9593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 61 | 0.1703 | 0.9522 |
| No log | 2.0 | 122 | 0.1679 | 0.9593 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cpu
- Datasets 3.0.0
- Tokenizers 0.19.1
|
ChrisBridges/labse-malach-multilabel
|
ChrisBridges
| 2024-09-25T09:49:59Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"history",
"historical",
"holocaust",
"war",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-07T10:25:54Z |
---
license: mit
language:
- en
metrics:
- f1
- precision
- recall
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- history
- historical
- holocaust
- war
---
# LaBSE-Malach-Multilabel
A multilabel text classification model fine-tuned on a small English subset (Malach ASR) of the Visual History Archive.
It is based on LaBSE pretrained weights but uses the general Hugging Face `transformers` framework, not sentence-transformers.
Input text segments consisted of ~350 words on average.
Given an input string, the model predicts probabilities for 1063 keyword IDs from the VHA ontology.
Typically, probabilities >= 0.5 are treated as "True" when encoding the predictions as a binary vector.
Because of the small amount of training data, the most likely predictions are often correct yet fall below this threshold.
The mapping from keyword IDs to labels will be added to the repository later.
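In the meantime, a minimal multilabel inference sketch is shown below; it assumes a standard sigmoid (BCE) classification head, which is typical for multilabel models, and uses a placeholder input text.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ChrisBridges/labse-malach-multilabel")
model = AutoModelForSequenceClassification.from_pretrained("ChrisBridges/labse-malach-multilabel")

text = "..."  # a ~350-word segment of testimony
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # multilabel: sigmoid per keyword ID

above_threshold = (probs >= 0.5).nonzero(as_tuple=True)[0]  # binary-vector encoding
top5 = torch.topk(probs, k=5)                               # most likely keywords, even below 0.5
print(above_threshold.tolist())
print(list(zip(top5.indices.tolist(), top5.values.tolist())))
```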
|
ZahidAhmad/my_awesome_mind_model
|
ZahidAhmad
| 2024-09-25T09:39:42Z | 146 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-09-25T09:34:25Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.04424778761061947
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6674
- Accuracy: 0.0442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6357 | 0.0619 |
| No log | 1.8667 | 7 | 2.6478 | 0.0708 |
| 2.6371 | 2.9333 | 11 | 2.6513 | 0.0885 |
| 2.6371 | 4.0 | 15 | 2.6596 | 0.0531 |
| 2.6371 | 4.8 | 18 | 2.6609 | 0.0354 |
| 2.6207 | 5.8667 | 22 | 2.6649 | 0.0354 |
| 2.6207 | 6.9333 | 26 | 2.6667 | 0.0354 |
| 2.6198 | 8.0 | 30 | 2.6674 | 0.0442 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
hashmarks/finetuning-sentiment-model-3000-samples
|
hashmarks
| 2024-09-25T09:34:55Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-25T09:26:57Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3139
- Accuracy: 0.87
- F1: 0.8713
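A minimal usage sketch (the label names returned depend on the training setup and are not documented here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="hashmarks/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was surprisingly good!"))
```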
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
nitidpong/water-meter-segmentation-Unet-efficientnet-b4-ReduceLROnPlateau
|
nitidpong
| 2024-09-25T09:27:08Z | 6 | 0 |
segmentation-models-pytorch
|
[
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] |
image-segmentation
| 2024-09-25T09:26:59Z |
---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# Unet Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
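A minimal inference sketch; the 512x512 input size is an illustrative assumption, and real images should be normalized with the encoder's preprocessing parameters.
```python
import torch

model.eval()
with torch.no_grad():
    # Dummy 3-channel input; real images should be resized to a multiple of 32
    # and normalized with the encoder's ImageNet preprocessing.
    logits = model(torch.randn(1, 3, 512, 512))
mask = logits.sigmoid() > 0.5  # single foreground class (classes=1)
```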
## Model init parameters
```python
model_init_params = {
"encoder_name": "efficientnet-b4",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_use_batchnorm": True,
"decoder_channels": (256, 128, 64, 32, 16),
"decoder_attention_type": None,
"in_channels": 3,
"classes": 1,
"activation": None,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7447869777679443,
"test_dataset_iou": 0.6949245929718018
}
]
```
## Dataset
Dataset name: water-meter
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
|
davidleiva4999/new-subnet29_upload_c02_S25_034Ij3
|
davidleiva4999
| 2024-09-25T09:25:36Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-25T09:25:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ananthu-aniraj/pdiscoformer_pimagenet_seg_k_50
|
ananthu-aniraj
| 2024-09-25T09:20:29Z | 9 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"image-classification",
"en",
"arxiv:2407.04538",
"base_model:timm/vit_base_patch14_reg4_dinov2.lvd142m",
"base_model:finetune:timm/vit_base_patch14_reg4_dinov2.lvd142m",
"license:mit",
"region:us"
] |
image-classification
| 2024-09-25T09:08:47Z |
---
pipeline_tag: image-classification
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- image-classification
license: mit
language:
- en
base_model:
- timm/vit_base_patch14_reg4_dinov2.lvd142m
---
# PdiscoFormer PartImageNet Seg Model (K=50)
PdiscoFormer (Vit-base-dinov2-reg4) trained on PartImageNet Seg with K (the number of unsupervised parts to discover) set to 50.
PdiscoFormer is a novel method for unsupervised part discovery using self-supervised Vision Transformers, achieving state-of-the-art results for this task both qualitatively and quantitatively. The code can be found in the following repository: https://github.com/ananthu-aniraj/pdiscoformer
# BibTex entry and citation info
```
@misc{aniraj2024pdiscoformerrelaxingdiscoveryconstraints,
title={PDiscoFormer: Relaxing Part Discovery Constraints with Vision Transformers},
author={Ananthu Aniraj and Cassio F. Dantas and Dino Ienco and Diego Marcos},
year={2024},
eprint={2407.04538},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.04538},
}
```
|
aman155/xlm-roberta-base-finetuned-panx-de
|
aman155
| 2024-09-25T09:17:10Z | 120 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-25T08:51:50Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1505 | 0.8246 |
| 0.1268 | 2.0 | 1050 | 0.1380 | 0.8503 |
| 0.0794 | 3.0 | 1575 | 0.1363 | 0.8658 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
mradermacher/PsyMedRose-20B-i1-GGUF
|
mradermacher
| 2024-09-25T09:16:11Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Elfrino/PsyMedRose-20B",
"base_model:quantized:Elfrino/PsyMedRose-20B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-09-24T17:23:40Z |
---
base_model: Elfrino/PsyMedRose-20B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Elfrino/PsyMedRose-20B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/PsyMedRose-20B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-IQ2_S.gguf) | i1-IQ2_S | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-IQ2_M.gguf) | i1-IQ2_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-Q2_K.gguf) | i1-Q2_K | 7.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-IQ3_S.gguf) | i1-IQ3_S | 8.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-IQ3_M.gguf) | i1-IQ3_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-Q4_0.gguf) | i1-Q4_0 | 11.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 13.9 | |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/PsyMedRose-20B-i1-GGUF/resolve/main/PsyMedRose-20B.i1-Q6_K.gguf) | i1-Q6_K | 16.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
rejjiex/Qwen-Qwen1.5-0.5B-1727255457
|
rejjiex
| 2024-09-25T09:11:10Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | 2024-09-25T09:10:58Z |
---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
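Since the card is not filled in, the following is only a hedged sketch based on the repository metadata (a PEFT adapter for `Qwen/Qwen1.5-0.5B`); the adapter type and intended task are assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the adapter weights stored in this repository on top of the base model.
model = PeftModel.from_pretrained(base_model, "rejjiex/Qwen-Qwen1.5-0.5B-1727255457")
model.eval()
```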
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
ngwgsang/bartpho-word-base-visp-s3
|
ngwgsang
| 2024-09-25T09:08:59Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-25T09:08:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sauc-abadal-lloret/gpt-j-6b-ALT-RM-tldr
|
sauc-abadal-lloret
| 2024-09-25T09:06:15Z | 8 | 0 | null |
[
"safetensors",
"gptj",
"en",
"dataset:CarperAI/openai_summarize_tldr",
"arxiv:2407.16970",
"arxiv:2009.01325",
"base_model:CarperAI/openai_summarize_tldr_sft",
"base_model:finetune:CarperAI/openai_summarize_tldr_sft",
"license:mit",
"region:us"
] | null | 2024-09-24T08:01:08Z |
---
license: mit
datasets:
- CarperAI/openai_summarize_tldr
language:
- en
base_model:
- EleutherAI/gpt-j-6b
- CarperAI/openai_summarize_tldr_sft
---
# ALT-RM model (reward model-based feedback)
Fine-tuned **GPT-J (6B)** model on the **TL;DR Summarization** dataset to be better aligned with human preferences on summaries, i.e., accounting for axes such as accuracy, coverage, and coherence, following the alignment approach introduced in the [ALT paper](https://www.arxiv.org/abs/2407.16970). This is the official model checkpoint; the code can be found [here](https://github.com/sauc-abadal/ALT/tree/main).
# Model description
The alignment process starts from an [SFT checkpoint](https://huggingface.co/CarperAI/openai_summarize_tldr_sft) released by CarperAI and trained using their [trlx](https://github.com/CarperAI/trlx/tree/main/examples/summarize_rlhf) library.
In a nutshell, the ALT method consists of providing textual feedback on on-policy sampled generations in order to learn the conditional probability distribution of a generation given both the prompt and the feedback. This logic is implemented in a three-stage decoupled pipeline, namely *sampling*, *feedback*, and *training*, where training uses a language-modelling objective with the feedback tokens prepended to the prompt.
In this way, the model learns to discriminate between different generations associated with various feedback types: it learns from both positive and negative examples that encompass the entire feedback spectrum, overcoming one of the main limitations of supervised fine-tuning, which typically learns only from positive demonstrations.
For extensive coverage on the ALT method, please refer to the paper.
In particular, the **ALT-RM** checkpoint collects the feedback by leveraging a [Reward Model](https://huggingface.co/CarperAI/openai_summarize_tldr_rm_checkpoint) to score the generations, and then maps reward quantiles, computed over several generations for the same prompt, to pre-defined textual feedback. For the summarization task on the TL;DR dataset, the mapping from quantiles to feedback was:
```python
{'QUANTILE 0': 'Excellent.',
'QUANTILE 1': 'Good.',
'QUANTILE 2': 'Mediocre.',
'QUANTILE 3': 'Bad.',
'QUANTILE 4': 'Horrible.'}
```
Thus, at inference time, the expected aligned behavior can be attained by conditioning the input with the `Excellent.` feedback.
**Related Models:** [ALT-Quark](https://huggingface.co/sauc-abadal-lloret/gpt-j-6b-ALT-Quark-tldr).
# Intended uses & limitations
This model originates from a research project focused on alignment and is intended primarily for research purposes. Commercial use as an off-the-shelf model is discouraged, as it was not designed with such applications in mind. The model is tailored specifically for the summarization task, having been trained on the TL;DR dataset, though some out-of-distribution generalization may be possible for related datasets.
# How to use
You should format the input by prepending the feedback as follows: `Excellent. input: {prompt}`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
checkpoint_path = "sauc-abadal-lloret/gpt-j-6b-ALT-RM-tldr"
tokenizer = AutoTokenizer.from_pretrained(checkpoint_path)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(checkpoint_path)
model.eval()
prompt = "Excellent. input: SUBREDDIT: r/relationship_advice\nTITLE: I'm [18M] going to a party where an old middle \
school crush [17F] is also going.\nPOST: Story time! Back in the summer after 8th grade, I hung out with my group of \
friends everyday for the whole summer. There was this girl in the group and I really liked her. Like I had the biggest \
and dumbest crush on her. I was only 13 so I didn't know shit, but I was thinking she's perfect for me, I gotta marry \
her and all this dumb stuff. The puppy love was so strong I wanted to be a part of her life and I wanted her to be a \
part of my life. I never had the courage to ask her out, and we went to different high schools. Eventually we stopped \
talking but during high school I never really liked anyone else. Every other girl felt dull compared to her. I still \
get nostalgic thinking about her and what would've been different if I had the balls to ask her out. Anyway I'm going \
to a party this Friday and I heard she's coming. I honestly don't know what to do to so this goes great and eventually \
ends up in a relationship.\nTL;DR:"
inputs = tokenizer([prompt], padding=True, truncation=True, return_tensors="pt")
input_seq_len = inputs["input_ids"].shape[1]
generation_config = GenerationConfig(
max_length = 2048,
max_new_tokens = 64,
do_sample = False,
num_beams = 1,
bad_words_ids = None,
num_return_sequences = 1,
return_dict_in_generate = True,
pad_token_id = tokenizer.pad_token_id,
)
outputs = model.generate(**inputs, generation_config=generation_config)
generated_input_ids = outputs["sequences"][:, input_seq_len:]
generated_text = tokenizer.batch_decode(
generated_input_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
generated_text
```
```
[" I have a huge crush on a girl who I never asked out and we went to different high schools. I'm going to a party this Friday and I heard she's coming. I honestly don't know what to do to so this goes great and eventually ends up in a relationship."]
```
## Training data
The model was trained on the TL;DR summarization dataset introduced in Stiennon et al.'s ["Learning to Summarize from Human Feedback"](https://arxiv.org/abs/2009.01325) paper. We employed the dataset version from CarperAI, which can be found on the Hugging Face Hub [here](https://huggingface.co/datasets/CarperAI/openai_summarize_tldr).
## Training procedure
The exact training procedure and hyper-parameter configuration can be found in our paper.
## Variables and metrics
As the main evaluation metric, we compute GPT-4 win-rates over PPO on a 1k random subset of the test set. We use the prompt provided in the DPO paper and ask GPT-4 to compare generations from ALT-RM, Quark, and PPO. Furthermore, we report the following metrics computed on the whole test set: average reward model score, perplexity measured by the SFT reference policy as a proxy for fluency, and average length of the generations. In addition, we conduct an out-of-domain evaluation and compute GPT-4 win-rates on 100 articles from the test split of the CNN/DailyMail dataset.
| **Model** | **TL;DR** (In-domain) | **CNN/DailyMail** (Out-of-domain) |
|:---------------:|:---------------------:|:----------------------------------:|
| Quark vs PPO | 0.36 | 0.40 |
| ALT-RM vs PPO | 0.50 | 0.48 |
*Win-rates with GPT-4. TL;DR on 1000 randomly chosen test prompts and CNN/DailyMail on 100 randomly chosen test prompts.*
| **Model** | **RM** | **PPL** | **Avg. len** | **# Train** |
|:---------------:|:---------------------:|:----------------------------------:|:----------------------------------:|:----------------------------------:|
| SFT | 2.89 | 1.96 | 31.25 | - |
| References | 2.89 | 11.84 | 32.60 | - |
| PPO | 3.38 | 2.29 | 67.52 | 116k |
| Quark | 3.52 | 1.82 | 49.42 | 19k |
| ALT-RM | 3.58 | 2.20 | 46.14 | 19k |
*TL;DR metrics on the whole test set, including avg. reward model score, perplexity, avg. generations’ length, and number of training prompts.*
## BibTeX entry and citation info
```
@misc{lloret2024aligninglanguagemodelstextual,
title={Towards Aligning Language Models with Textual Feedback},
author={Saüc Abadal Lloret and Shehzaad Dhuliawala and Keerthiram Murugesan and Mrinmaya Sachan},
year={2024},
eprint={2407.16970},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.16970},
}
```
|
NikolayKozloff/EuroLLM-1.7B-Q8_0-GGUF
|
NikolayKozloff
| 2024-09-25T09:05:02Z | 12 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"de",
"es",
"fr",
"it",
"pt",
"pl",
"nl",
"tr",
"sv",
"cs",
"el",
"hu",
"ro",
"fi",
"uk",
"sl",
"sk",
"da",
"lt",
"lv",
"et",
"bg",
"no",
"ca",
"hr",
"ga",
"mt",
"gl",
"zh",
"ru",
"ko",
"ja",
"ar",
"hi",
"base_model:utter-project/EuroLLM-1.7B",
"base_model:quantized:utter-project/EuroLLM-1.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-25T09:04:49Z |
---
base_model: utter-project/EuroLLM-1.7B
language:
- en
- de
- es
- fr
- it
- pt
- pl
- nl
- tr
- sv
- cs
- el
- hu
- ro
- fi
- uk
- sl
- sk
- da
- lt
- lv
- et
- bg
- 'no'
- ca
- hr
- ga
- mt
- gl
- zh
- ru
- ko
- ja
- ar
- hi
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/EuroLLM-1.7B-Q8_0-GGUF
This model was converted to GGUF format from [`utter-project/EuroLLM-1.7B`](https://huggingface.co/utter-project/EuroLLM-1.7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/utter-project/EuroLLM-1.7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/EuroLLM-1.7B-Q8_0-GGUF --hf-file eurollm-1.7b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/EuroLLM-1.7B-Q8_0-GGUF --hf-file eurollm-1.7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/EuroLLM-1.7B-Q8_0-GGUF --hf-file eurollm-1.7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/EuroLLM-1.7B-Q8_0-GGUF --hf-file eurollm-1.7b-q8_0.gguf -c 2048
```
|
luaqi/sn29_back_v1
|
luaqi
| 2024-09-25T08:42:43Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-25T08:39:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
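The card is empty, so the following is only a hedged sketch inferred from the repository tags (a `phi3`-architecture text-generation model with custom code):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "luaqi/sn29_back_v1"

# trust_remote_code=True is needed because the repo tags indicate custom model code.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```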
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TweedBeetle/llama3-test_model_quantized
|
TweedBeetle
| 2024-09-25T08:35:13Z | 60 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-09-25T08:33:55Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
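The card is otherwise empty; the sketch below is inferred only from the repository tags (a 4-bit GPTQ-quantized Llama checkpoint) and assumes a GPTQ-capable backend (e.g. `auto-gptq`) is installed:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TweedBeetle/llama3-test_model_quantized"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The GPTQ config stored in the repo tells transformers how to load the 4-bit weights.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", torch_dtype=torch.float16)

inputs = tokenizer("Hello, ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```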
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hanungaddi/2000
|
hanungaddi
| 2024-09-25T08:33:34Z | 13 | 1 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-09-23T12:09:06Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: zipsy_nai
widget:
- text: >-
This is a digital anime-style drawing featuring a female character named
tingyun with anthropomorphic fox-like features. She has large, pointed ears
with white fur tips and a bushy, dark brown tail that curls around her body.
Her skin is a light, smooth complexion, and she has long, wavy brown hair
with red highlights, tied into a high ponytail. She has striking green eyes
with a hint of mischief in them. She is dressed in a revealing white bikini
with intricate with maple leaf accessories. The top is a halter style that
accentuates her ample breasts, while the bottom is a high-waisted design
that highlights her hips and thighs. She also wears a matching transparent
brown sarong covering her hips, which has a floral print pattern. On her
wrist, she has a turquoise bracelet, and she wears large, ornate earrings
that match the colors of her bikini. she sitting on top of a rock enjoying
the scene. The background now features a tranquil spring bath scene,
surrounded by blooming cherry blossom trees. The warm steam rises from the
natural hot spring, and the rocky edges are softened by moss and scattered
petals. The sky above is a soft pink and blue, capturing the serene mood of
a spring evening. The light is gentle and diffused, giving the entire scene
a peaceful, secluded atmosphere, enhancing the soft, watercolor-inspired
shading of the artwork. The color palette is rich and soothing, creating a
harmonious blend between the serene environment and the reflective
expression of the character. the picture has 0 imperfections, smooth and
beautiful picture. no messy spot. the picture has 0 imperfections, smooth
and beautiful picture. no messy spot.
output:
url: images/example_qew9iua90.png
- text: >-
This is a vibrant anime-style digital illustration, featuring a young
chinese woman named Jingliu with a fair skin tone and long, flowing half
updo silver-blue hair adorned with a small blue hair bow on the back tying
her beautiful hair. She has striking red eyes and a gentle, somewhat shy
expression. Her gaze is sideways, giving a subtle, mysterious look from a
side profile. Her physique is slender and curvy, with ample breasts
accentuated by a revealing two-piece bikini. focus on The top is white with
a blue flower pattern, and the bottom is a matching blue fabric tied around
her waist, accompanied by a transparent blue sarong. sitting on top of a
rock enjoying the scene. The background now features a tranquil spring bath
scene, surrounded by blooming cherry blossom trees. The warm steam rises
from the natural hot spring, and the rocky edges are softened by moss and
scattered petals. The sky above is a soft pink and blue, capturing the
serene mood of a spring evening. The light is gentle and diffused, giving
the entire scene a peaceful, secluded atmosphere, enhancing the soft,
watercolor-inspired shading of the artwork. The color palette is rich and
soothing, creating a harmonious blend between the serene environment and the
reflective expression of the character.
output:
url: images/example_g8ymtvwjj.png
---
# 2000
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `zipsy_nai` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('hanungaddi/2000', weight_name='lora.safetensors')
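# Include the trigger word `zipsy_nai` in the prompt (see "Trigger words" above), e.g. pipeline('zipsy_nai, ...')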
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
expertai/LLaMAntino-3-SLIMER-IT
|
expertai
| 2024-09-25T08:01:26Z | 5 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pythorch",
"llama-3",
"llamantino",
"zero-shot NER",
"NER",
"conversational",
"it",
"arxiv:2409.15933",
"base_model:swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA",
"base_model:finetune:swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-07-09T09:12:28Z |
---
language:
- it
pipeline_tag: text-generation
license: llama3
tags:
- facebook
- meta
- pythorch
- llama
- llama-3
- llamantino
- zero-shot NER
- NER
base_model: swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA
---
# SLIMER-IT: Show Less Instruct More Entity Recognition - Italian language
SLIMER-IT is an LLM specifically instructed for zero-shot NER on the Italian language.
Github repository: https://github.com/andrewzamai/SLIMER_IT
Instructed on a reduced number of tags (PER, ORG, LOC), it is designed to tackle never-seen-before Named Entity tags by leveraging a prompt enriched with a DEFINITION and GUIDELINES for the NE to be extracted.
Built with Meta Llama 3; based on the Italian instruction-tuned model swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA.
<!DOCTYPE html>
<html>
<head>
<title>Instruction Tuning Prompt</title>
<style>
.container {
border: none;
padding: 5px;
width: 300px;
margin: 0 auto;
font-family: Arial, sans-serif;
font-size: 8px;
border-radius: 10px; /* Rounded borders for container */
overflow: hidden; /* Ensure child elements respect container's rounded borders */
}
.header {
background-color: black;
color: white;
padding: 5px;
text-align: center;
font-weight: bold;
font-size: 14px;
border-top-left-radius: 10px; /* Rounded top-left corner */
border-top-right-radius: 10px; /* Rounded top-right corner */
}
.content {
padding: 5px;
}
.definition, .guidelines {
padding: 5px;
border-radius: 10px; /* Rounded borders for definition and guidelines */
}
.definition {
background-color: #ffa3f0;
}
.guidelines {
background-color: #93e2fa;
}
.footer {
background-color: black;
color: white;
padding: 10px;
font-weight: bold;
border-bottom-left-radius: 10px;
border-bottom-right-radius: 10px;
}
</style>
</head>
<body>
<div class="container">
<div class="header">Instruction Tuning Prompt</div>
<div class="content">
<p>Ti viene fornito un input di testo (delimitato da tre virgolette) e un'istruzione.<br>
Leggi il testo e rispondi all'istruzione alla fine.</p>
<p>"""<br>
{input di testo}<br>
"""</p>
<p><b>Istruzione:</b> Estrai tutte le entità di tipo <b>ENTITÀ MITOLOGICA</b> dal testo che hai letto.</p>
<p>Ti vengono fornite una <b>DEFINIZIONE</b> e alcune <b>LINEE GUIDA</b>.</p>
<div class="definition">
<p><b>DEFINIZIONE:</b> <b>ENTITÀ MITOLOGICA</b> denota personaggi, divinità, creature o figure mitologiche provenienti da tradizioni religiose, miti, leggende o folklore.</p>
</div>
<div class="guidelines">
<p><b>LINEE GUIDA:</b> Assicurati di non etichettare come ENTITÀ MITOLOGICA personaggi storici o letterari reali. Ad esempio, 'Alessandro Magno' è un personaggio storico, non una figura mitologica. Inoltre, fai attenzione a distinguere nomi comuni o nomi di luoghi che possono riferirsi anche a figure mitologiche, come 'Diana', che può essere un nome proprio e il nome della dea romana della caccia.</p>
</div>
<p>Restituisci una lista JSON di istanze di questo tipo. Restituisci una lista vuota se non sono presenti istanze.</p>
</div>
<div class="footer"></div>
</div>
</body>
</html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>JSON Template</title>
<style>
body {
font-family: Arial, sans-serif;
line-height: 1.6;
padding: 20px;
}
.description {
font-weight: bold;
color: #333;
margin-bottom: 10px;
}
.template {
background-color: #f0f0f0;
padding: 10px;
border-radius: 5px;
margin-bottom: 20px;
}
.highlight-orange {
color: orange;
font-weight: bold;
}
</style>
</head>
<body>
<div class="description">JSON SLIMER-IT prompt</div>
<div class="template">
<pre>{
"description": "SLIMER prompt for Italian",
"prompt_input": "<|start_header_id|>system<|end_header_id|>\n\n Sei un utile assistente.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nTi viene fornito un input di testo (delimitato da tre virgolette) e un'istruzione. \nLeggi il testo e rispondi all'istruzione alla fine.\n\"\"\"\n{<span class="highlight-orange">input</span>}\n\"\"\"\nIstruzione: Estrai tutte le entità di tipo {<span class="highlight-orange">NE_name</span>} dal testo che hai letto. Ti vengono fornite una DEFINIZIONE e alcune LINEE GUIDA.\nDEFINIZIONE: {<span class="highlight-orange">definition</span>}\nLINEE GUIDA: {<span class="highlight-orange">guidelines</span>}\nRestituisci una lista JSON di istanze di questo tipo. Restituisci una lista vuota se non sono presenti istanze.<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n\n"
}</pre>
</div>
</body>
</html>
```python
from vllm import LLM, SamplingParams

vllm_model = LLM(model="expertai/SLIMER-IT")
sampling_params = SamplingParams(temperature=0, max_tokens=128)

# `prompter` and `instruction_input_pairs` come from the SLIMER_IT GitHub repo linked above:
# the prompter fills the prompt template shown above with the NE name, DEFINITION and GUIDELINES.
prompts = [prompter.generate_prompt(instruction, input) for instruction, input in instruction_input_pairs]
responses = vllm_model.generate(prompts, sampling_params)
```
## Citation
If you find SLIMER-IT useful in your research or work, please cite the following paper:
``` latex
@misc{zamai2024slimeritzeroshotneritalian,
title={SLIMER-IT: Zero-Shot NER on Italian Language},
author={Andrew Zamai and Leonardo Rigutini and Marco Maggini and Andrea Zugarini},
year={2024},
eprint={2409.15933},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2409.15933},
}
```
|
Jahid05/Gemma-2-2b-website-prompt-generation-v2
|
Jahid05
| 2024-09-25T07:45:51Z | 75 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-25T07:43:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pandalori/autotrain-image-classifier-cats-and-dogs
|
pandalori
| 2024-09-25T07:39:00Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"vit",
"autotrain",
"image-classification",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"region:us"
] |
image-classification
| 2024-09-24T16:03:25Z |
---
tags:
- autotrain
- image-classification
base_model: google/vit-base-patch16-224
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.0158307533711195
f1: 0.9961538461538462
precision: 0.9940298507462687
recall: 0.9982869379014989
auc: 0.9994886327395326
accuracy: 0.9961579509071505
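A minimal inference sketch with the 🤗 `pipeline` API (the class labels come from the model's config and are not listed in this card):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="pandalori/autotrain-image-classifier-cats-and-dogs",
)

# Accepts a local file path, a PIL image, or an image URL.
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```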
|
mhamilton723/DenseAV-sound
|
mhamilton723
| 2024-09-25T07:32:38Z | 37 | 0 |
transformers
|
[
"transformers",
"safetensors",
"denseav",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"arxiv:2406.05629",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-06-12T15:41:05Z |
---
license: mit
tags:
- denseav
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/mhamilton723/DenseAV
- Paper: https://arxiv.org/abs/2406.05629
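A hedged loading sketch: the `torch.hub` entry point and its name `"sound"` are assumptions inferred from the checkpoint name and the linked repository, not from this card; check the GitHub README for the exact API:
```python
import torch

# Assumption: the DenseAV repo exposes torch.hub entry points named after its checkpoints.
model = torch.hub.load("mhamilton723/DenseAV", "sound")
model.eval()
```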
|
computerandgyein/gemma2-9b-finetuned-PromptEngineering
|
computerandgyein
| 2024-09-25T07:25:13Z | 5 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2-9b",
"base_model:adapter:google/gemma-2-9b",
"license:gemma",
"region:us"
] | null | 2024-09-25T01:36:36Z |
---
base_model: google/gemma-2-9b
library_name: peft
license: gemma
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: gemma2-9b-finetuned-PromptEngineering
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma2-9b-finetuned-PromptEngineering
This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
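A hedged usage sketch (per the metadata this repository holds a PEFT/LoRA adapter for `google/gemma-2-9b`; merging the adapter is optional and shown only as an illustration):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2-9b"
adapter_id = "computerandgyein/gemma2-9b-finetuned-PromptEngineering"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Optionally fold the LoRA weights into the base model for faster inference.
model = model.merge_and_unload()
```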
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Gangseok/animalgpt
|
Gangseok
| 2024-09-25T07:21:00Z | 6 | 0 | null |
[
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] |
image-classification
| 2024-09-25T07:08:07Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: animalgpt
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9196428656578064
---
# animalgpt
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Cat

#### Dog

#### Lion

#### Tiger

#### Wolf

|
anandsh83/gita-text-generation-gpt2
|
anandsh83
| 2024-09-25T07:19:59Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-25T07:18:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jerseyjerry/Qwen-Qwen2-1.5B-1727248782
|
jerseyjerry
| 2024-09-25T07:19:57Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2-1.5B",
"base_model:adapter:Qwen/Qwen2-1.5B",
"region:us"
] | null | 2024-09-25T07:19:43Z |
---
base_model: Qwen/Qwen2-1.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
saad7489/segformer-b1-finetuned-segments-sidewalk-25
|
saad7489
| 2024-09-25T07:18:04Z | 18 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-09-25T07:06:01Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: segformer-b1-finetuned-segments-sidewalk-25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b1-finetuned-segments-sidewalk-25
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Kwon-Seong-Hoon/Weather_gpt
|
Kwon-Seong-Hoon
| 2024-09-25T07:18:00Z | 5 | 1 | null |
[
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] |
image-classification
| 2024-09-25T07:17:16Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Weather_gpt
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7410714030265808
---
# Weather_gpt
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### rainy

#### snowy

#### sunny

#### thunderstorm

#### windy

|
SALUTEASD/Qwen-Qwen1.5-0.5B-1727248479
|
SALUTEASD
| 2024-09-25T07:14:48Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | 2024-09-25T07:14:40Z |
---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
ChakuChidiya/distilbert-base-uncased-G1
|
ChakuChidiya
| 2024-09-25T07:08:47Z | 49 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-04-23T13:59:59Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-base-uncased-G1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-G1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1894
- Validation Loss: 0.3447
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 4e-05, 'decay_steps': 2205, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.07}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5347 | 0.3843 | 0 |
| 0.2871 | 0.3327 | 1 |
| 0.1894 | 0.3447 | 2 |
### Framework versions
- Transformers 4.37.0
- TensorFlow 2.15.0
- Datasets 2.14.5
- Tokenizers 0.15.1
|
llllllllllllllllllllllllllleeeeeeeeeeeeee/relationship
|
llllllllllllllllllllllllllleeeeeeeeeeeeee
| 2024-09-25T07:07:31Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] |
image-classification
| 2024-09-25T07:07:15Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: relationship
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5223880410194397
---
# relationship
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Couple

#### Family

#### Friend

|
Sban57/ske_sk
|
Sban57
| 2024-09-25T07:06:24Z | 7 | 0 | null |
[
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] |
image-classification
| 2024-09-25T07:06:15Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ske_sk
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9253731369972229
---
# ske_sk
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Pasta

#### Pizza

#### Ramen

|
Kaasiein/kasi-bert-mrpc
|
Kaasiein
| 2024-09-25T07:03:36Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-24T21:11:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nicholasbien/gpt2_lmd_ppo_2
|
nicholasbien
| 2024-09-25T06:58:57Z | 183 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-25T06:50:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cykpig/Gpters_image_test_cho
|
cykpig
| 2024-09-25T06:58:15Z | 7 | 0 | null |
[
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] |
image-classification
| 2024-09-25T06:58:07Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Gpters_image_test_cho
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9639639854431152
---
# Gpters_image_test_cho
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Bird

#### Cat

#### Dog

#### Hippo

#### Pig

|
soulgarden/models
|
soulgarden
| 2024-09-25T06:57:58Z | 17 | 0 | null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-09-09T20:05:33Z |
---
license: apache-2.0
---
|
bluemiracle0214/rare-puppers
|
bluemiracle0214
| 2024-09-25T06:54:01Z | 6 | 0 | null |
[
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] |
image-classification
| 2024-09-25T06:53:55Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9285714030265808
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Cat

#### Chicken

#### Cow

#### Dog

#### Pig

|
andeskyl/bert-base-cased-qnli
|
andeskyl
| 2024-09-25T06:35:00Z | 9 | 0 | null |
[
"safetensors",
"bert",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-09-24T15:46:36Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9077429983525536
---
# bert-base-cased-qnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2835
- Accuracy: 0.9077
## Model description
Please refer to [this repository](https://huggingface.co/google-bert/bert-base-cased).
## Intended uses
This model is for the artifact evaluation of the paper "SHAFT: Secure, Handy, Accurate, and Fast Transformer Inference."
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
|
tim-lawson/mlsae-pythia-70m-deduped-x128-k32-lens
|
tim-lawson
| 2024-09-25T06:33:51Z | 6 | 0 |
mlsae
|
[
"mlsae",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"en",
"license:mit",
"region:us"
] | null | 2024-09-25T06:33:36Z |
---
language: en
library_name: mlsae
license: mit
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/tim-lawson/mlsae
- Docs: [More Information Needed]
|
tim-lawson/mlsae-pythia-70m-deduped-x128-k32-lens-tfm
|
tim-lawson
| 2024-09-25T06:33:32Z | 6 | 0 |
mlsae
|
[
"mlsae",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"en",
"license:mit",
"region:us"
] | null | 2024-09-25T06:33:02Z |
---
language: en
library_name: mlsae
license: mit
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/tim-lawson/mlsae
- Docs: [More Information Needed]
|
andeskyl/bert-base-cased-sst2
|
andeskyl
| 2024-09-25T06:33:22Z | 5 | 0 | null |
[
"safetensors",
"bert",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-09-24T16:05:11Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.926605504587156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-sst2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2890
- Accuracy: 0.9266
## Model description
Please refer to [this repository](https://huggingface.co/google-bert/bert-base-cased).
## Intended uses
This model is for the artifact evaluation of the paper "SHAFT: Secure, Handy, Accurate, and Fast Transformer Inference."
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
|
RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf
|
RichardErkhov
| 2024-09-25T06:25:58Z | 5 | 0 | null |
[
"gguf",
"arxiv:2405.14734",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-24T22:22:21Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-Instruct-8B-CPO - GGUF
- Model creator: https://huggingface.co/princeton-nlp/
- Original model: https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-CPO/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-Instruct-8B-CPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama-3-Instruct-8B-CPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Llama-3-Instruct-8B-CPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Llama-3-Instruct-8B-CPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama-3-Instruct-8B-CPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Llama-3-Instruct-8B-CPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama-3-Instruct-8B-CPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama-3-Instruct-8B-CPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama-3-Instruct-8B-CPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama-3-Instruct-8B-CPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama-3-Instruct-8B-CPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama-3-Instruct-8B-CPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama-3-Instruct-8B-CPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama-3-Instruct-8B-CPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama-3-Instruct-8B-CPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama-3-Instruct-8B-CPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama-3-Instruct-8B-CPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama-3-Instruct-8B-CPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama-3-Instruct-8B-CPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama-3-Instruct-8B-CPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama-3-Instruct-8B-CPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q6_K.gguf) | Q6_K | 6.14GB |
| [Llama-3-Instruct-8B-CPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/princeton-nlp_-_Llama-3-Instruct-8B-CPO-gguf/blob/main/Llama-3-Instruct-8B-CPO.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
This is a model released from the preprint: [SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734). Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
|
QuantFactory/Violet_Twilight-v0.2-GGUF
|
QuantFactory
| 2024-09-25T06:17:46Z | 109 | 1 | null |
[
"gguf",
"merge",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:anthracite-org/stheno-filtered-v1.1",
"dataset:PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT",
"dataset:Gryphe/Sonnet3.5-Charcard-Roleplay",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"dataset:anthracite-org/kalo_opus_misc_240827",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-09-25T04:28:38Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
license: apache-2.0
tags:
- merge
datasets:
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/stheno-filtered-v1.1
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
pipeline_tag: text-generation
model-index:
- name: Violet_Twilight-v0.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 45.32
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 23.94
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 2.72
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 2.13
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.61
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.45
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
name: Open LLM Leaderboard
---
[](https://hf.co/QuantFactory)
# QuantFactory/Violet_Twilight-v0.2-GGUF
This is quantized version of [Epiculous/Violet_Twilight-v0.2](https://huggingface.co/Epiculous/Violet_Twilight-v0.2) created using llama.cpp
# Original Model Card

Now for something a bit different, Violet_Twilight-v0.2! This model is a SLERP merge of Azure_Dusk-v0.2 and Crimson_Dawn-v0.2!
# Quants!
<strong>full</strong> / [exl2](https://huggingface.co/Epiculous/Violet_Twilight-v0.2-exl2) / [gguf](https://huggingface.co/Epiculous/Violet_Twilight-v0.2-GGUF)
## Prompting
The v0.2 models are trained on ChatML, the prompting structure goes a little something like this:
```
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
### Context and Instruct
The v0.2 models are trained on ChatML, please use that Context and Instruct template.
### Current Top Sampler Settings
[Spicy_Temp](https://files.catbox.moe/9npj0z.json) <br/>
[Violet_Twilight-Nitral-Special](https://files.catbox.moe/ot54u3.json) <br/>
## Merging
The following config was used to merge Azure Dusk and Crimson Dawn
```yaml
slices:
- sources:
- model: Epiculous/Azure_Dusk-v0.2
layer_range: [0, 40]
- model: Epiculous/Crimson_Dawn-V0.2
layer_range: [0, 40]
merge_method: slerp
base_model: Epiculous/Azure_Dusk-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Epiculous__Violet_Twilight-v0.2)
| Metric |Value|
|-------------------|----:|
|Avg. |18.53|
|IFEval (0-Shot) |45.32|
|BBH (3-Shot) |23.94|
|MATH Lvl 5 (4-Shot)| 2.72|
|GPQA (0-shot) | 2.13|
|MuSR (0-shot) |13.61|
|MMLU-PRO (5-shot) |23.45|
|
novalalthoff/wav2vec2-large-robust-id-google-fleurs-10hr-50
|
novalalthoff
| 2024-09-25T06:16:43Z | 62 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-09-25T06:14:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
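Until an official snippet is added, a minimal ASR sketch with the standard pipeline is shown below (the repo name suggests Indonesian FLEURS fine-tuning; the audio path is a placeholder and should be 16 kHz mono for wav2vec2 models):

```python
from transformers import pipeline

# Minimal automatic-speech-recognition sketch for this fine-tuned wav2vec2 checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="novalalthoff/wav2vec2-large-robust-id-google-fleurs-10hr-50",
)
print(asr("path/to/audio.wav"))  # placeholder path; returns {"text": "..."}
```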
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
faisy/layoutlmv3-financial-document-classification
|
faisy
| 2024-09-25T06:16:09Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"layoutlmv3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-25T06:15:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
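No snippet is given; below is a minimal, unofficial sketch for classifying a document image with LayoutLMv3. It assumes the repo ships a processor config with OCR enabled (which requires `pytesseract`); if it does not, the `microsoft/layoutlmv3-base` processor can be substituted:

```python
from PIL import Image
from transformers import AutoModelForSequenceClassification, AutoProcessor

repo = "faisy/layoutlmv3-financial-document-classification"
processor = AutoProcessor.from_pretrained(repo)        # LayoutLMv3 processor runs OCR by default
model = AutoModelForSequenceClassification.from_pretrained(repo)

image = Image.open("path/to/invoice.png").convert("RGB")  # placeholder path
inputs = processor(image, return_tensors="pt", truncation=True)
pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label.get(pred, pred))
```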
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| novalalthoff/wav2vec2-large-xlsr-53-id-google-fleurs-10hr-50 | novalalthoff | 2024-09-25T06:13:44Z | 62 | 0 | transformers | ["transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2024-09-25T06:11:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| trongg/Flux-Dev2Pro_nsfw_fluxtastic-v3 | trongg | 2024-09-25T05:57:34Z | 41 | 1 | diffusers | ["diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:FluxPipeline", "region:us"] | text-to-image | 2024-09-25T03:16:58Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| ahmedheakl/asm2asm-deepseek-1.3b-100k-x86-arm-O2 | ahmedheakl | 2024-09-25T05:37:12Z | 65 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "base_model:deepseek-ai/deepseek-coder-1.3b-instruct", "base_model:finetune:deepseek-ai/deepseek-coder-1.3b-instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-24T23:35:58Z |
---
library_name: transformers
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: asm2asm-deepseek-1.3b-100k-x86-arm-O2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asm2asm-deepseek-1.3b-100k-x86-arm-O2
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
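Pending an official recipe from the authors, the sketch below shows one plausible way to query the checkpoint with 🤗 `transformers`. The chat-style prompt wording and the x86 snippet are assumptions for illustration, not the documented training format.
```python
# Hypothetical usage sketch -- the prompt wording is an assumption, not the authors' documented format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ahmedheakl/asm2asm-deepseek-1.3b-100k-x86-arm-O2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

x86_asm = "mov eax, 1\nadd eax, 2\nret"  # placeholder x86 input
messages = [{"role": "user", "content": f"Translate this x86 assembly to ARM assembly:\n{x86_asm}"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```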
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1
| huy2007/fake_model_GGUF | huy2007 | 2024-09-25T05:24:39Z | 18 | 0 | transformers | ["transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-25T05:21:50Z |
---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** huy2007
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| asr-africa/wolof-1-hour-wav2vec2-xls-r-google-fleurs | asr-africa | 2024-09-25T05:24:15Z | 65 | 1 | transformers | ["transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:fleurs", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2024-09-25T04:30:16Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: wolof-1-hour-wav2vec2-xls-r-google-fleurs
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: fleurs
type: fleurs
config: wo_sn
split: None
args: wo_sn
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/asr-africa-research-team/ASR%20Africa/runs/ogk3j7c0)
# wolof-1-hour-wav2vec2-xls-r-google-fleurs
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0203
- Wer: 1.0
- Cer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
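Note that with a WER and CER of 1.0 on the evaluation set (see above), this checkpoint is unlikely to produce usable transcriptions yet. For completeness, a minimal, hypothetical inference sketch with the 🤗 `pipeline` API looks like this (the audio path is a placeholder):
```python
# Hypothetical inference sketch; "sample.wav" is a placeholder 16 kHz audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="asr-africa/wolof-1-hour-wav2vec2-xls-r-google-fleurs",
)
print(asr("sample.wav"))
```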
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|
| 4.3753 | 25.0 | 200 | 3.0125 | 1.0 | 1.0 |
| 3.003 | 50.0 | 400 | 3.0203 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.0+cu118
- Datasets 2.17.0
- Tokenizers 0.19.1
| werty1248/Mistral-Nemo-NT-Ko-12B-dpo-GGUF | werty1248 | 2024-09-25T05:21:14Z | 10 | 3 | null | ["gguf", "llama-cpp", "gguf-my-repo", "en", "ko", "ja", "zh", "dataset:zake7749/kyara-chinese-preference-rl-dpo-s0-30K", "dataset:sionic/ko-dpo-mix-7k-trl-style", "dataset:kuotient/orca-math-korean-dpo-pairs", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:werty1248/Mistral-Nemo-NT-Ko-12B-dpo", "base_model:quantized:werty1248/Mistral-Nemo-NT-Ko-12B-dpo", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-25T04:48:40Z |
---
base_model: werty1248/Mistral-Nemo-NT-Ko-12B-dpo
datasets:
- zake7749/kyara-chinese-preference-rl-dpo-s0-30K
- sionic/ko-dpo-mix-7k-trl-style
- kuotient/orca-math-korean-dpo-pairs
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
- ko
- ja
- zh
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# werty1248/Mistral-Nemo-NT-Ko-12B-dpo-GGUF
This model was converted to GGUF format from [`werty1248/Mistral-Nemo-NT-Ko-12B-dpo`](https://huggingface.co/werty1248/Mistral-Nemo-NT-Ko-12B-dpo) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/werty1248/Mistral-Nemo-NT-Ko-12B-dpo) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo werty1248/Mistral-Nemo-NT-Ko-12B-dpo-GGUF --hf-file mistral-nemo-nt-ko-12b-dpo-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo werty1248/Mistral-Nemo-NT-Ko-12B-dpo-GGUF --hf-file mistral-nemo-nt-ko-12b-dpo-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo werty1248/Mistral-Nemo-NT-Ko-12B-dpo-GGUF --hf-file mistral-nemo-nt-ko-12b-dpo-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo werty1248/Mistral-Nemo-NT-Ko-12B-dpo-GGUF --hf-file mistral-nemo-nt-ko-12b-dpo-q8_0.gguf -c 2048
```
| nitidpong/water-meter-segmentation-FPN | nitidpong | 2024-09-25T05:10:27Z | 6 | 0 | segmentation-models-pytorch | ["segmentation-models-pytorch", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "semantic-segmentation", "pytorch", "image-segmentation", "license:mit", "region:us"] | image-segmentation | 2024-09-25T05:10:18Z |
---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# FPN Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "resnet50",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_pyramid_channels": 256,
"decoder_segmentation_channels": 128,
"decoder_merge_policy": "add",
"decoder_dropout": 0.2,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7208837866783142,
"test_dataset_iou": 0.6995143294334412
}
]
```
## Dataset
Dataset name: water-meter
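Below is a rough, unofficial inference sketch; the image path, 512×512 resize, and ImageNet normalization are assumptions based on the `resnet50` / `imagenet` encoder settings above, not the authors' documented preprocessing.
```python
# Hypothetical inference sketch; preprocessing choices are assumptions.
import numpy as np
import torch
from PIL import Image
import segmentation_models_pytorch as smp

model = smp.from_pretrained("nitidpong/water-meter-segmentation-FPN").eval()

image = Image.open("meter.jpg").convert("RGB").resize((512, 512))  # placeholder path and size
x = torch.from_numpy(np.asarray(image)).float().permute(2, 0, 1) / 255.0
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)  # ImageNet stats (encoder_weights="imagenet")
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
x = ((x - mean) / std).unsqueeze(0)

with torch.no_grad():
    logits = model(x)  # (1, 1, 512, 512): classes=1, activation=None
mask = (logits.sigmoid() > 0.5).squeeze().cpu().numpy()
```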
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
| emilioafl/llama-3-8b-Instruct-bnb-4bit-cicd-support | emilioafl | 2024-09-25T05:08:00Z | 6 | 0 | transformers | ["transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-22T21:26:41Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** emilioafl
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| JhonatanSeguraP/bot | JhonatanSeguraP | 2024-09-25T04:59:36Z | 183 | 0 | transformers | ["transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-25T04:59:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| renderartist/coloringbookflux | renderartist | 2024-09-25T04:58:01Z | 11,091 | 35 | diffusers | ["diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-09-25T03:48:14Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: c0l0ringb00k coloring book page, simple cute rocket, white background
output:
url: images/ComfyUI_09627_.png
- text: >-
c0l0ringb00k, simple coloring book easy to color, soda can with the word
"FLUX" on it in cursive style font
output:
url: images/ComfyUI_09731_.png
- text: c0l0ringb00k, simple coloring book easy for toddlers to color, a sword
output:
url: images/ComfyUI_09697_.png
- text: c0l0ringb00k coloring book page, simple cute legos, white background
output:
url: images/ComfyUI_09642_.png
- text: >-
c0l0ringb00k coloring book page, cute Lizard riding a skateboard, white
background
output:
url: images/ComfyUI_09649_.png
- text: c0l0ringb00k coloring book page, cute sheep, white background
output:
url: images/ComfyUI_09653_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: c0l0ringb00k, coloring book, coloring book page
license: creativeml-openrail-m
---
# Coloring Book Flux
<Gallery />
## Model description
Coloring Book Flux is a Flux LoRA trained on a 100-image synthetic dataset that I personally generated; the images were mostly illustrations of humans, vehicles, and animals. All of the images were captioned using Joy Caption Batch.
FOR THE BEST RESULTS USE DEIS SAMPLER!
This has been a pet project of mine for a while and it took lots of trial and error to get it to where it is at this stage. I iterated through many failed attempts until finally I began getting consistent results by limiting repeats, increasing epochs and lowering DIM/ALPHA settings in Kohya.
One neat surprise with this one is that you can prompt for colored images in this style as well. When prompting for coloring book pages it might help to explicitly mention a white background.
This can be a very useful resource for coloring books, posters, greeting cards, print on demand, stock imagery, clip art and so much more.
I hope you have as much fun with this as I did creating it!
You have to work a little to get the desired results, and sometimes subjects bleed or blend together, but overall the style comes through and the results can be really good. Expect to take a couple of tries with this LoRA, adjusting your prompt and adding tokens to match the style.
## Trigger words
Use any of the following trigger words to trigger the image generation:
- `c0l0ringb00k`
- `coloring book`
- `coloring book page`
## Download model
Weights for this model are available in Safetensors format.
[Download](/renderartist/coloringbookflux/tree/main) them in the Files & versions tab.
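## Use it with the 🧨 diffusers library
The snippet below is a rough sketch rather than an official recipe from the author: the LoRA weight filename and the generation settings are assumptions (check the Files & versions tab for the actual `.safetensors` name), and the DEIS sampler tip above refers to ComfyUI rather than this exact diffusers setup.
```python
# Hypothetical usage sketch -- weight_name and generation settings are assumptions.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("renderartist/coloringbookflux", weight_name="coloringbookflux.safetensors")  # placeholder filename
pipe.to("cuda")

image = pipe(
    "c0l0ringb00k coloring book page, simple cute rocket, white background",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("coloring_book_rocket.png")
```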
| phuongntc/rlhf_thamsoSFT_vietlarge_4000 | phuongntc | 2024-09-25T04:42:12Z | 99 | 0 | transformers | ["transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2024-09-25T04:39:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Jahid05/Gemma-2-2b-it-website-prompt-generator-Q4_K_M-GGUF | Jahid05 | 2024-09-25T04:33:08Z | 7 | 0 | transformers | ["transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:Jahid05/Gemma-2-2b-it-website-prompt-generator", "base_model:quantized:Jahid05/Gemma-2-2b-it-website-prompt-generator", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-25T04:32:51Z |
---
base_model: Jahid05/Gemma-2-2b-it-website-prompt-generator
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# Jahid05/Gemma-2-2b-it-website-prompt-generator-Q4_K_M-GGUF
This model was converted to GGUF format from [`Jahid05/Gemma-2-2b-it-website-prompt-generator`](https://huggingface.co/Jahid05/Gemma-2-2b-it-website-prompt-generator) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Jahid05/Gemma-2-2b-it-website-prompt-generator) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jahid05/Gemma-2-2b-it-website-prompt-generator-Q4_K_M-GGUF --hf-file gemma-2-2b-it-website-prompt-generator-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jahid05/Gemma-2-2b-it-website-prompt-generator-Q4_K_M-GGUF --hf-file gemma-2-2b-it-website-prompt-generator-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jahid05/Gemma-2-2b-it-website-prompt-generator-Q4_K_M-GGUF --hf-file gemma-2-2b-it-website-prompt-generator-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jahid05/Gemma-2-2b-it-website-prompt-generator-Q4_K_M-GGUF --hf-file gemma-2-2b-it-website-prompt-generator-q4_k_m.gguf -c 2048
```
| ditherr/my_awesome_qa_model | ditherr | 2024-09-25T04:32:48Z | 100 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2024-09-25T04:15:16Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3970
## Model description
More information needed
## Intended uses & limitations
More information needed
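No usage snippet is documented yet; a minimal, hypothetical sketch with the 🤗 `pipeline` API (the question and context are placeholders) would be:
```python
# Hypothetical usage sketch with placeholder inputs.
from transformers import pipeline

qa = pipeline("question-answering", model="ditherr/my_awesome_qa_model")
result = qa(
    question="What task was the model fine-tuned for?",
    context="my_awesome_qa_model is a DistilBERT checkpoint fine-tuned for extractive question answering.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```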
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7294 | 1.0 | 500 | 1.6258 |
| 1.4128 | 2.0 | 1000 | 1.4051 |
| 1.0856 | 3.0 | 1500 | 1.3970 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
| Jahid05/Gemma-2-2b-it-website-prompt-generator | Jahid05 | 2024-09-25T04:30:58Z | 75 | 0 | transformers | ["transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-25T04:25:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| jkazdan/step_val_5_gemma-2-2b_hs2_iter1_sftsd2 | jkazdan | 2024-09-25T04:17:22Z | 6 | 0 | null | ["safetensors", "gemma2", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-2-2b", "base_model:finetune:google/gemma-2-2b", "license:gemma", "region:us"] | null | 2024-09-25T04:14:21Z |
---
license: gemma
base_model: google/gemma-2-2b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: step_val_5_gemma-2-2b_hs2_iter1_sftsd2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# step_val_5_gemma-2-2b_hs2_iter1_sftsd2
This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1953
- Num Input Tokens Seen: 295904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 8
- eval_batch_size: 16
- seed: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| No log | 0 | 0 | 1.3956 | 0 |
| 1.1688 | 0.0511 | 5 | 1.1953 | 295904 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| amazingvince/custom-seq2seq | amazingvince | 2024-09-25T04:13:38Z | 9 | 1 | null | ["safetensors", "custom_seq2seq", "custom_code", "license:apache-2.0", "region:us"] | null | 2024-09-03T03:22:28Z |
---
license: apache-2.0
---
| nitidpong/water-meter-segmentation | nitidpong | 2024-09-25T04:12:23Z | 5 | 1 | segmentation-models-pytorch | ["segmentation-models-pytorch", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "semantic-segmentation", "pytorch", "image-segmentation", "license:mit", "region:us"] | image-segmentation | 2024-09-05T09:03:05Z |
---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# Unet Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "resnet50",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_use_batchnorm": True,
"decoder_channels": (256, 128, 64, 32, 16),
"decoder_attention_type": None,
"in_channels": 3,
"classes": 1,
"activation": None,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7155967354774475,
"test_dataset_iou": 0.6878355741500854
}
]
```
## Dataset
Dataset name: water-meter
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
| jkazdan/step_val_5_gemma-2-2b_hs2_iter1_sftsd0 | jkazdan | 2024-09-25T04:06:17Z | 5 | 0 | null | ["safetensors", "gemma2", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-2-2b", "base_model:finetune:google/gemma-2-2b", "license:gemma", "region:us"] | null | 2024-09-25T04:03:38Z |
---
license: gemma
base_model: google/gemma-2-2b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: step_val_5_gemma-2-2b_hs2_iter1_sftsd0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# step_val_5_gemma-2-2b_hs2_iter1_sftsd0
This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1944
- Num Input Tokens Seen: 296352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 8
- eval_batch_size: 16
- seed: 0
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| No log | 0 | 0 | 1.3956 | 0 |
| 1.2528 | 0.0511 | 5 | 1.1944 | 296352 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| Jimbo-Joe/ChatWaifu_22B_v2.0_preview-Q8_0-GGUF | Jimbo-Joe | 2024-09-25T04:01:04Z | 6 | 1 | transformers | ["transformers", "gguf", "nsfw", "Visual novel", "roleplay", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "text-generation", "en", "ja", "dataset:roleplay4fun/aesir-v1.1", "dataset:kalomaze/Opus_Instruct_3k", "dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned", "dataset:Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted", "dataset:Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted", "dataset:SkunkworksAI/reasoning-0.01", "base_model:spow12/ChatWaifu_22B_v2.0_preview", "base_model:quantized:spow12/ChatWaifu_22B_v2.0_preview", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-09-25T03:59:07Z |
---
base_model: spow12/ChatWaifu_22B_v2.0_preview
datasets:
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- SkunkworksAI/reasoning-0.01
language:
- en
- ja
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- nsfw
- Visual novel
- roleplay
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Jimbo-Joe/ChatWaifu_22B_v2.0_preview-Q8_0-GGUF
This model was converted to GGUF format from [`spow12/ChatWaifu_22B_v2.0_preview`](https://huggingface.co/spow12/ChatWaifu_22B_v2.0_preview) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/spow12/ChatWaifu_22B_v2.0_preview) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jimbo-Joe/ChatWaifu_22B_v2.0_preview-Q8_0-GGUF --hf-file chatwaifu_22b_v2.0_preview-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jimbo-Joe/ChatWaifu_22B_v2.0_preview-Q8_0-GGUF --hf-file chatwaifu_22b_v2.0_preview-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jimbo-Joe/ChatWaifu_22B_v2.0_preview-Q8_0-GGUF --hf-file chatwaifu_22b_v2.0_preview-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jimbo-Joe/ChatWaifu_22B_v2.0_preview-Q8_0-GGUF --hf-file chatwaifu_22b_v2.0_preview-q8_0.gguf -c 2048
```
| ragefu/ftxclip20240924model | ragefu | 2024-09-25T03:56:01Z | 90 | 0 | transformers | ["transformers", "safetensors", "xclip", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | feature-extraction | 2024-09-25T03:55:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| njbAI/sd-class-butterflies-32 | njbAI | 2024-09-25T03:47:18Z | 44 | 0 | diffusers | ["diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us"] | unconditional-image-generation | 2024-09-25T03:13:47Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('njbAI/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
| zequnl/molxpt | zequnl | 2024-09-25T03:45:30Z | 118 | 1 | transformers | ["transformers", "pytorch", "biogpt", "text-generation", "custom_code", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-04-15T11:04:28Z |
---
library_name: transformers
tags: []
---
# MolXPT
Our model is a GPT variant pre-trained on SMILES strings (a sequence representation of molecules) wrapped with text. It is based on [BioGPT](https://huggingface.co/microsoft/biogpt), with a redefined tokenizer.
## Example Usage
```python
from transformers import AutoTokenizer, BioGptForCausalLM

# Load the MolXPT checkpoint; the custom tokenizer requires trust_remote_code.
model = BioGptForCausalLM.from_pretrained("zequnl/molxpt")
molxpt_tokenizer = AutoTokenizer.from_pretrained("zequnl/molxpt", trust_remote_code=True)
model = model.cuda()
model.eval()

# Aspirin's SMILES wrapped in the molecule boundary tokens; the model continues the prompt with text.
input_ids = molxpt_tokenizer('<start-of-mol>CC(=O)OC1=CC=CC=C1C(=O)O<end-of-mol> is ', return_tensors="pt").input_ids.cuda()

# Sample four continuations.
output = model.generate(
    input_ids,
    max_new_tokens=300,
    num_return_sequences=4,
    temperature=0.75,
    top_p=0.95,
    do_sample=True,
)
for i in range(4):
    s = molxpt_tokenizer.decode(output[i])
    print(s)
```
## References
For more information, please refer to our paper and GitHub repository.
Paper: [MolXPT: Wrapping Molecules with Text for Generative Pre-training](https://aclanthology.org/2023.acl-short.138/)
Authors: *Zequn Liu, Wei Zhang, Yingce Xia, Lijun Wu, Shufang Xie, Tao Qin, Ming Zhang, Tie-Yan Liu*
| alan918727/ttt-125M-TinyStories | alan918727 | 2024-09-25T03:41:08Z | 33 | 0 | transformers | ["transformers", "safetensors", "ttt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-09-25T03:40:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| pramudyalyza/vit-base-patch16-224-emotion-classifier | pramudyalyza | 2024-09-25T03:31:11Z | 177 | 0 | transformers | ["transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2024-09-25T03:30:55Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: results
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7954
- Accuracy: 0.375
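A minimal inference sketch (assuming the image processor was saved to the same repository; the emotion label names come from the checkpoint's `id2label` config):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the image-classification pipeline.
classifier = pipeline(
    "image-classification",
    model="pramudyalyza/vit-base-patch16-224-emotion-classifier",
)

# "face.jpg" is a placeholder path; any local image file or URL works.
predictions = classifier("face.jpg")
print(predictions)  # e.g. [{"label": ..., "score": ...}, ...]
```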
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9066 | 1.0 | 40 | 1.9540 | 0.275 |
| 1.76 | 2.0 | 80 | 1.8608 | 0.35 |
| 1.651 | 3.0 | 120 | 1.8128 | 0.3688 |
| 1.5967 | 4.0 | 160 | 1.7954 | 0.375 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
| QuantFactory/Vapor_7B-GGUF | QuantFactory | 2024-09-25T03:28:58Z | 48 | 1 | transformers | ["transformers", "gguf", "base_model:Qwen/Qwen2.5-7B", "base_model:quantized:Qwen/Qwen2.5-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-25T02:47:35Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B
library_name: transformers
---
[](https://hf.co/QuantFactory)
# QuantFactory/Vapor_7B-GGUF
This is a quantized version of [FourOhFour/Vapor_7B](https://huggingface.co/FourOhFour/Vapor_7B), created using llama.cpp.
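As a rough usage sketch, the GGUF files can be loaded with the `llama-cpp-python` bindings; the quant filename pattern below is an assumption, so check the repository's file list for the exact name:
```python
from llama_cpp import Llama

# Hypothetical quant pattern -- pick a file that actually exists in the repo.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Vapor_7B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```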
# Original Model Card
```
base_model: Qwen/Qwen2.5-7B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
type: sharegpt
conversation: chatml
- path: NewEden/Kalo-Opus-Instruct-22k-Refusal-Murdered
type: sharegpt
conversation: chatml
- path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
type: sharegpt
conversation: chatml
- path: NewEden/Gryphe-Sonnet-3.5-35k-Subset
type: sharegpt
conversation: chatml
- path: Nitral-AI/Reasoning-1shot_ShareGPT
type: sharegpt
conversation: chatml
- path: Nitral-AI/GU_Instruct-ShareGPT
type: sharegpt
conversation: chatml
- path: Nitral-AI/Medical_Instruct-ShareGPT
type: sharegpt
conversation: chatml
chat_template: chatml
val_set_size: 0.01
output_dir: ./outputs/out
adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
sequence_len: 8192
# sequence_len: 32768
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
wandb_project: qwen7B
wandb_entity:
wandb_watch:
wandb_name: qwen7B
wandb_log_model:
gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed:
fsdp:
fsdp_config:
special_tokens:
pad_token: <pad>
```
| botbotbot/QQQWEN-R | botbotbot | 2024-09-25T03:24:18Z | 77 | 0 | transformers | ["transformers", "pytorch", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-09-25T03:23:17Z |
---
base_model: unsloth/qwen2.5-1.5b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** botbotbot
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-1.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
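A minimal loading sketch with plain `transformers`, assuming merged 16-bit weights were uploaded (the chat template is inherited from the Qwen2.5 instruct base):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "botbotbot/QQQWEN-R"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```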
| lbxrb1221/Llama-3-Taiwan-8B-Instruct-abliterated-v1 | lbxrb1221 | 2024-09-25T03:22:45Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "en", "base_model:yentinglin/Llama-3-Taiwan-8B-Instruct", "base_model:finetune:yentinglin/Llama-3-Taiwan-8B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-25T02:39:48Z |
---
language:
- zh
- en
base_model:
- yentinglin/Llama-3-Taiwan-8B-Instruct
pipeline_tag: text-generation
library_name: transformers
---
| Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 | Orenguteng | 2024-09-25T02:49:53Z | 26,382 | 171 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3.1", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-08-09T20:39:15Z |
---
license: llama3.1
model-index:
- name: Llama-3.1-8B-Lexi-Uncensored-V2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 77.92
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 29.69
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 16.92
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.36
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 7.77
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 30.9
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
name: Open LLM Leaderboard
library_name: transformers
---

VERSION 2 Update Notes:
---
- More compliant
- Smarter
- For the best response, use this system prompt (feel free to expand upon it as you wish):
Think step by step with a logical reasoning and intellectual sense before you provide any response.
- For a more uncensored and compliant response, you can expand the system message differently, or simply enter a dot "." as the system message.
- IMPORTANT: Upon further investigation, the Q4 quant sometimes shows refusal issues.
Some of the fine-tune appears to be lost due to quantization; I will look into it for V3.
Until then, I suggest running F16 or Q8 if possible.

GENERAL INFO:
---
This model is based on Llama-3.1-8b-Instruct, and is governed by [META LLAMA 3.1 COMMUNITY LICENSE AGREEMENT](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
Lexi is uncensored, which makes the model compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones.
You are responsible for any content you create using this model. Please use it responsibly.
Lexi is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that is in accordance with Meta's Llama-3.1 license.
IMPORTANT:
---
Use the same template as the official Llama 3.1 8B instruct.
System tokens must be present during inference, even if you set an empty system message. If you are unsure, just add a short system message as you wish.
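For illustration, a minimal sketch of applying the official chat template with `transformers` (the system message is just a placeholder; adjust it as described above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Keep a system entry so the system tokens are present, even if it is only a dot.
messages = [
    {"role": "system", "content": "Think step by step with logical reasoning before you respond."},
    {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```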
FEEDBACK:
---
If you find any issues or have suggestions for improvements, feel free to leave a review and I will look into it for upcoming improvements and next version.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Orenguteng__Llama-3.1-8B-Lexi-Uncensored-V2)
| Metric |Value|
|-------------------|----:|
|Avg. |27.93|
|IFEval (0-Shot) |77.92|
|BBH (3-Shot) |29.69|
|MATH Lvl 5 (4-Shot)|16.92|
|GPQA (0-shot) | 4.36|
|MuSR (0-shot) | 7.77|
|MMLU-PRO (5-shot) |30.90|
| sarthakharne/bert-base-pretrain-on-textbooks | sarthakharne | 2024-09-25T02:49:05Z | 198 | 1 | transformers | ["transformers", "safetensors", "bert", "fill-mask", "text-classification", "en", "arxiv:2406.00314", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-03-15T13:58:28Z |
---
library_name: transformers
license: cc-by-4.0
pipeline_tag: text-classification
language:
- en
---
# CASE: Efficient Curricular Data Pre-training for Building Assistive Psychology Expert Models
This repository contains the model weights for the paper "CASE: Efficient Curricular Data Pre-training for Building Assistive Psychology Expert Models", presented at EMNLP 2024. The paper can be found [here](https://arxiv.org/abs/2406.00314).
## Authors
- Sarthak Harne
- Monjoy Narayan Choudhury
- Madhav Rao
- TK Srikanth
- Seema Mehrotra
- Apoorva Vashisht
- Aarushi Basu
- Manjit Sodhi
## Abstract
The limited availability of psychologists necessitates efficient identification of individuals requiring urgent mental healthcare. This study explores the use of Natural Language Processing (NLP) pipelines to analyze text data from online mental health forums used for consultations. By analyzing forum posts, these pipelines can flag users who may require immediate professional attention. A crucial challenge in this domain is data privacy and scarcity. To address this, we propose utilizing readily available curricular texts used in institutes specializing in mental health for pre-training the NLP pipelines. This helps us mimic the training process of a psychologist. Our work presents CASE-BERT that flags potential mental health disorders based on forum text. CASE-BERT demonstrates superior performance compared to existing methods, achieving an f1 score of 0.91 for Depression and 0.88 for Anxiety, two of the most commonly reported mental health disorders.
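If this checkpoint is the curricular pre-trained backbone rather than the final classifier (the repository name and the `fill-mask` tag suggest so), a typical next step is to fine-tune it for flagging; a purely illustrative sketch:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sarthakharne/bert-base-pretrain-on-textbooks"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# num_labels=2 is illustrative (e.g. flag / no-flag); the classification head is newly
# initialized here, so the model must be fine-tuned on labelled forum data before use.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer(
    "I have not been able to sleep or focus for weeks.",
    return_tensors="pt",
    truncation=True,
)
print(model(**inputs).logits.shape)  # torch.Size([1, 2])
```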
| zzjharry/tinystarcoder-rlhf-model | zzjharry | 2024-09-25T02:47:14Z | 89 | 0 | transformers | ["transformers", "safetensors", "gpt_bigcode", "text-generation", "trl", "reward-trainer", "generated_from_trainer", "base_model:bigcode/tiny_starcoder_py", "base_model:finetune:bigcode/tiny_starcoder_py", "license:bigcode-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-25T02:46:09Z |
---
library_name: transformers
license: bigcode-openrail-m
base_model: bigcode/tiny_starcoder_py
tags:
- trl
- reward-trainer
- generated_from_trainer
model-index:
- name: tinystarcoder-rlhf-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinystarcoder-rlhf-model
This model is a fine-tuned version of [bigcode/tiny_starcoder_py](https://huggingface.co/bigcode/tiny_starcoder_py) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
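Given the `trl` / `reward-trainer` tags, the checkpoint is presumably a sequence-classification reward head; a minimal scoring sketch under that assumption:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "zzjharry/tinystarcoder-rlhf-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # StarCoder tokenizers may lack a pad token
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score two candidate completions; a higher scalar means "preferred" (assuming num_labels == 1).
candidates = [
    "def add(a, b):\n    return a + b",
    "def add(a, b):\n    return a - b",
]
inputs = tokenizer(candidates, return_tensors="pt", padding=True)
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)
print(scores.tolist())
```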
| linhphanff/dense_256 | linhphanff | 2024-09-25T02:16:36Z | 90 | 0 | transformers | ["transformers", "safetensors", "qwen2", "feature-extraction", "custom_code", "arxiv:1910.09700", "text-generation-inference", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2024-09-25T02:09:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|