| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-02 00:39:05 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 532 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-02 00:38:59 |
| card | string | lengths 11 to 1.01M |

Each record below gives these fields in this order on a single row, followed by the full model card.
band2001/stolaf-angora-1600 | band2001 | 2024-04-25T15:43:27Z | 4 | 0 | transformers | [transformers, safetensors, gemma, text-generation, conversational, dataset:band2001/stolaf-angora, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2024-04-10T02:41:10Z |
---
license: mit
datasets:
- band2001/stolaf-angora
---
# Model Card for Angora-1600
<!-- Provide a quick summary of what the model is/does. -->
This model has been created to help computer science students at St. Olaf College (Northfield, MN) answer questions about fundamental CS principles as well as questions about the specific technical stacks and procedures St. Olaf Computer Science uses.
## Angora-1600 Details
This model is built on [Google's Gemma 7b-it](https://huggingface.co/google/gemma-7b-it). It was fine-tuned on a dataset created specifically to address St. Olaf-specific computer science questions; some of these questions reference the specific Git instance the institution uses or walk through the steps to declare the computer science major. The model was fine-tuned with MLX on an Apple M3 Max chip for 1600 iterations, using LoRA as the fine-tuning method.
- **Developed by:** Ben Anderson & Keegan Murray
- **Funded by:** St. Olaf College MSCS Department
- **Model type:** Generative
- **License:** MIT
- **Finetuned from model:** [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it)
<!-- Provide the basic links for the model. -->
- **Repository:** See the GitHub repository [here](https://github.com/band2001/stolaf-angora)
- **Paper:** Coming soon...
- **Demo:** A video demo is available [here](https://drive.google.com/file/d/1iwThVj88FTgLNANZdv2NineRcBXAqtZp/view?usp=sharing).
## Uses
This is intended to be used by Computer Science students at St. Olaf College. While it can be used broadly for general computer science questions, it has been finetuned to answer questions specific to the St. Olaf Computer Science program.
## How to Get Started with the Model
Use the code below to get started with the model.
### Direct Use With Transformers Library
#### Use a pipeline as a high-level helper
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="band2001/stolaf-angora-1600")
```
#### Load model directly
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("band2001/stolaf-angora-1600")
model = AutoModelForCausalLM.from_pretrained("band2001/stolaf-angora-1600", device_map="auto")
input_ids = tokenizer("YOUR PROMPT HERE", return_tensors="pt").to("YOUR DEVICE IF USING GPU ACCELERATION")
outputs = model.generate(**input_ids, max_new_tokens=256)
decoded_output = tokenizer.decode(outputs[0])
```
### Direct Use With MLX Library
Note that MLX can only be used on Apple Silicon Macs; one of the Max-series chips (or better) is recommended.
```python
from mlx_lm import load, generate
def format_prompt(prompt, system_prompt = "YOUR SYSTEM PROMPT"):
return """<bos><start_of_turn>user
## Instructions
{}
## User
{}<end_of_turn>
<start_of_turn>model
""".format(system_prompt, prompt)
model, tokenizer = load("band2001/stolaf-angora-1600")
prompt = format_prompt("YOUR PROMPT HERE")
decoded_output = generate(
model,
tokenizer,
prompt=prompt,
verbose=True,
temp=0.0,
max_tokens=256,
)
```
### Out-of-Scope Use
Outside of asking questions about computer science topics (both general and specific to St. Olaf College), this model should not be used for other inference. Asking questions about other topics will likely yield answers, but those responses fall outside the fine-tuning data and will most likely contain errors and/or potentially offensive content.
## Bias, Risks, and Limitations
Because we created the fine-tuning dataset from scratch, it is relatively limited compared to the overall size of the model: our dataset has about 2000 observations, while the model has roughly 8.5B parameters. So while the dataset had a noticeable effect on the tuning of this model, the model will still occasionally fall back on its pre-trained knowledge and give partially incorrect answers to St. Olaf-specific questions.
Also note the limitations present in the [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) model and assume they are present in this model as well.
## Training Details
### Training Data
The training data can be found in the St. Olaf Angora Dataset ([band2001/stolaf-angora](https://huggingface.co/datasets/band2001/stolaf-angora)).
### Training Procedure
To train the model, the data needs to be in the following format. Note that the data in [band2001/stolaf-angora](https://huggingface.co/datasets/band2001/stolaf-angora) is already formatted this way.
```
<bos><start_of_turn>user
## Instructions
system prompt goes here
## User
prompt/query goes here<end_of_turn>
<start_of_turn>model
model response here (put a response here for tuning purposes)<end_of_turn><eos>
```
Once the data is in the correct format, fine-tuning with QLoRA is recommended. The model can be fine-tuned either with mlx-lm and MPS (to tune on an Apple Silicon machine) or with a bitsandbytes configuration and CUDA (to tune on a machine with NVIDIA GPUs).
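For the bitsandbytes/CUDA route, a minimal sketch of loading the Gemma base model in 4-bit before attaching adapters might look like the following; the quantization settings shown are illustrative assumptions, not the configuration used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit quantization config for QLoRA-style fine-tuning (assumed values).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b-it",
    quantization_config=bnb_config,
    device_map="auto",  # place the quantized weights on the available GPU(s)
)
```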
#### Preprocessing
To preprocess your data to be in the correct format outlined above, you can use the following helper function:
```python
def generate_prompt(entry, system_prompt = SYSTEM_PROMPT):
'''
This function formats a question/answer pair to gemma's chat template.
:param: entry - a dictionary with an instruction and a response
:param: system_prompt: the system prompt to be used
:return: the formatted string for gemma's chat template
'''
return """<bos><start_of_turn>user
## Instructions
{}
## User
{}<end_of_turn>
<start_of_turn>model
{}<end_of_turn><eos>""".format(system_prompt, entry["instruction"], entry["response"])
```
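As an example, the helper can be mapped over the Hugging Face dataset to produce formatted training text. This is a sketch: the `SYSTEM_PROMPT` placeholder and the `train` split name are assumptions.

```python
from datasets import load_dataset

SYSTEM_PROMPT = "YOUR SYSTEM PROMPT"  # placeholder, as above

# Assumes a "train" split with "instruction" and "response" columns.
dataset = load_dataset("band2001/stolaf-angora", split="train")
dataset = dataset.map(lambda entry: {"text": generate_prompt(entry, SYSTEM_PROMPT)})
print(dataset[0]["text"])
```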
When trying to use inference with this model, you can format the user's query using this helper function:
```python
def format_prompt(prompt, system_prompt = SYSTEM_PROMPT):
'''
This function formats a question to gemma's chat template.
:param: prompt - a string with the user's query
:param: system_prompt: the system prompt to be used
:return: the formatted string for gemma's chat template
'''
return """<bos><start_of_turn>user
## Instructions
{}
## User
{}<end_of_turn>
<start_of_turn>model
""".format(system_prompt, prompt)
```
#### Training Process
The MLX LoRA fine-tuning approach was used; you can learn more about [MLX LoRA here](https://github.com/ml-explore/mlx-examples/blob/main/lora/README.md). Gemma-7b-it was loaded without any conversion. The default `batch_size = 16` was used, and to reach a 1600-iteration fine-tuned model the model was tuned for 800 iterations twice. Once the fine-tuned weights were created, the model was fused using MLX's fuse functionality; you can learn more about [fusing with MLX here](https://github.com/ml-explore/mlx-examples/blob/main/lora/README.md#Fuse-and-Upload). One important change when fusing with MLX was modifying the MLX package code to include `"format":"pt"` in the metadata so this model can be used with the transformers library. To do that, tweak the library code in `<path_to_your_site-packages>/mlx_lm/utils.py` by replacing `mx.save_safetensors(str(shard_path), shard, metadata={"format":"mlx"})` with `mx.save_safetensors(str(shard_path), shard, metadata={"format":"pt"})` so the fused weights are written with that metadata attribute. Special thanks to [Alexweberk's guide on GitHub](https://gist.github.com/alexweberk/635431b5c5773efd6d1755801020429f) for helping solve this issue. Finally, the fused model was uploaded to this HuggingFace repo!
If you look at the GitHub repo for this project, `mlx_lora.sh` contains the command used for the LoRA fine-tuning, `mlx_fuse.sh` the command for fusing the model, and `mlx_upload.sh` the upload command. There is also an optional `mlx_convert.sh` for converting the Google Gemma 7b-it model before fine-tuning, if desired.
## Evaluation
Testing loss and perplexity were the two metrics used to evaluate the Angora models. A summary of the results for all the different iteration models is included below.
### Results
| Number of iterations | Testing Loss | Perplexity |
|:----------|:----------|:---------|
| 800 | 0.569 | 1.766 |
| 1600 | 0.302 | 1.352 |
| 2400 | 0.225 | 1.252 |
| 3200 | 0.185 | 1.203 |
| 4000 | 0.170 | 1.185 |
### Testing Data
The testing data is available [here](https://huggingface.co/datasets/band2001/stolaf-angora/viewer/default/test).
## Model Card Contact
Ben Anderson - [ander6@stolaf.edu](mailto:ander6@stolaf.edu)
Keegan Murray - [murray7@stolaf.edu](mailto:murray7@stolaf.edu)
|
nluai/question-generation-vietnamese | nluai | 2024-04-25T15:42:48Z | 103 | 0 | transformers | [transformers, pytorch, mt5, text2text-generation, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2024-04-25T15:18:33Z |
## Model description
This model is a sequence-to-sequence question generator that takes an answer and a context as input and generates a question as output. It is based on Google's pre-trained [mt5-base](https://github.com/google-research/multilingual-t5) model.
## Training data
The model was fine-tuned on [XQuAD](https://github.com/deepmind/xquad).
## Example usage
```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer
import torch
model = MT5ForConditionalGeneration.from_pretrained("nluai/question-generation-vietnamese")
tokenizer = AutoTokenizer.from_pretrained("nluai/question-generation-vietnamese")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
# Content used to create a set of questions
context = '''Thành phố Hồ Chí Minh (còn gọi là Sài Gòn) tên gọi cũ trước 1975 là Sài Gòn hay Sài Gòn-Gia Định là thành phố lớn nhất ở Việt Nam về dân số và quy mô đô thị hóa. Đây còn là trung tâm kinh tế, chính trị, văn hóa và giáo dục tại Việt Nam. Thành phố Hồ Chí Minh là thành phố trực thuộc trung ương thuộc loại đô thị đặc biệt của Việt Nam cùng với thủ đô Hà Nội.Nằm trong vùng chuyển tiếp giữa Đông Nam Bộ và Tây Nam Bộ, thành phố này hiện có 16 quận, 1 thành phố và 5 huyện, tổng diện tích 2.061 km². Theo kết quả điều tra dân số chính thức vào thời điểm ngày một tháng 4 năm 2009 thì dân số thành phố là 7.162.864 người (chiếm 8,34% dân số Việt Nam), mật độ dân số trung bình 3.419 người/km². Đến năm 2019, dân số thành phố tăng lên 8.993.082 người và cũng là nơi có mật độ dân số cao nhất Việt Nam. Tuy nhiên, nếu tính những người cư trú không đăng ký hộ khẩu thì dân số thực tế của thành phố này năm 2018 là gần 14 triệu người.'''
encoding = tokenizer.encode_plus(context, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
output = model.generate(input_ids=input_ids, attention_mask=attention_masks, max_length=256)
question = tokenizer.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(question)
# question: Thành phố hồ chí minh có bao nhiêu quận? ("How many districts does Ho Chi Minh City have?")
```
|
band2001/stolaf-angora-3200 | band2001 | 2024-04-25T15:42:41Z | 5 | 0 | transformers | [transformers, safetensors, gemma, text-generation, conversational, dataset:band2001/stolaf-angora, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2024-04-10T02:24:54Z |
---
license: mit
datasets:
- band2001/stolaf-angora
---
# Model Card for Angora-3200
<!-- Provide a quick summary of what the model is/does. -->
This model has been created to help computer science students at St. Olaf College (Northfield, MN) answer questions about fundamental CS principles as well as questions about the specific technical stacks and procedures St. Olaf Computer Science uses.
## Angora-3200 Details
This model is built on [Google's Gemma 7b-it](https://huggingface.co/google/gemma-7b-it). It was fine-tuned on a dataset created specifically to address St. Olaf-specific computer science questions; some of these questions reference the specific Git instance the institution uses or walk through the steps to declare the computer science major. The model was fine-tuned with MLX on an Apple M3 Max chip for 3200 iterations, using LoRA as the fine-tuning method.
- **Developed by:** Ben Anderson & Keegan Murray
- **Funded by:** St. Olaf College MSCS Department
- **Model type:** Generative
- **License:** MIT
- **Finetuned from model:** [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it)
<!-- Provide the basic links for the model. -->
- **Repository:** See the GitHub repository [here](https://github.com/band2001/stolaf-angora)
- **Paper:** Coming soon...
- **Demo:** A video demo is available [here](https://drive.google.com/file/d/1iwThVj88FTgLNANZdv2NineRcBXAqtZp/view?usp=sharing).
## Uses
This is intended to be used by Computer Science students at St. Olaf College. While it can be used broadly for general computer science questions, it has been finetuned to answer questions specific to the St. Olaf Computer Science program.
## How to Get Started with the Model
Use the code below to get started with the model.
### Direct Use With Transformers Library
#### Use a pipeline as a high-level helper
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="band2001/stolaf-angora-3200")
```
#### Load model directly
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("band2001/stolaf-angora-3200")
model = AutoModelForCausalLM.from_pretrained("band2001/stolaf-angora-3200", device_map="auto")
input_ids = tokenizer("YOUR PROMPT HERE", return_tensors="pt").to("YOUR DEVICE IF USING GPU ACCELERATION")
outputs = model.generate(**input_ids, max_new_tokens=256)
decoded_output = tokenizer.decode(outputs[0])
```
### Direct Use With MLX Library
Note that MLX can only be used on Apple Silicon Macs; one of the Max-series chips (or better) is recommended.
```python
from mlx_lm import load, generate
def format_prompt(prompt, system_prompt = "YOUR SYSTEM PROMPT"):
return """<bos><start_of_turn>user
## Instructions
{}
## User
{}<end_of_turn>
<start_of_turn>model
""".format(system_prompt, prompt)
model, tokenizer = load("band2001/stolaf-angora-3200")
prompt = format_prompt("YOUR PROMPT HERE")
decoded_output = generate(
model,
tokenizer,
prompt=prompt,
verbose=True,
temp=0.0,
max_tokens=256,
)
```
### Out-of-Scope Use
Outside of asking questions about computer science topics (both general and specific to St. Olaf College), this model should not be used for other inference. Asking questions about other topics will likely yield answers, but those responses fall outside the fine-tuning data and will most likely contain errors and/or potentially offensive content.
## Bias, Risks, and Limitations
Because we created the fine-tuning dataset from scratch, it is relatively limited compared to the overall size of the model: our dataset has about 2000 observations, while the model has roughly 8.5B parameters. So while the dataset had a noticeable effect on the tuning of this model, the model will still occasionally fall back on its pre-trained knowledge and give partially incorrect answers to St. Olaf-specific questions.
Also note the limitations present in the [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) model and assume they are present in this model as well.
## Training Details
### Training Data
The training data can be found in the St. Olaf Angora Dataset ([band2001/stolaf-angora](https://huggingface.co/datasets/band2001/stolaf-angora)).
### Training Procedure
To train the model, the data needs to be in the following format. Note that the data in [band2001/stolaf-angora](https://huggingface.co/datasets/band2001/stolaf-angora) is already formatted this way.
```
<bos><start_of_turn>user
## Instructions
system prompt goes here
## User
prompt/query goes here<end_of_turn>
<start_of_turn>model
model response here (put a response here for tuning purposes)<end_of_turn><eos>
```
Once the data is in the correct format, fine-tuning with QLoRA is recommended. The model can be fine-tuned either with mlx-lm and MPS (to tune on an Apple Silicon machine) or with a bitsandbytes configuration and CUDA (to tune on a machine with NVIDIA GPUs).
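On the CUDA side, attaching LoRA adapters with the PEFT library might look like the following sketch; the rank, dropout, and target modules are illustrative assumptions, not the values used for this model.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto")

# Illustrative LoRA settings (assumed values).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```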
#### Preprocessing
To preprocess your data to be in the correct format outlined above, you can use the following helper function:
```python
def generate_prompt(entry, system_prompt = SYSTEM_PROMPT):
'''
This function formats a question/answer pair to gemma's chat template.
:param: entry - a dictionary with an instruction and a response
:param: system_prompt: the system prompt to be used
:return: the formatted string for gemma's chat template
'''
return """<bos><start_of_turn>user
## Instructions
{}
## User
{}<end_of_turn>
<start_of_turn>model
{}<end_of_turn><eos>""".format(system_prompt, entry["instruction"], entry["response"])
```
When trying to use inference with this model, you can format the user's query using this helper function:
```python
def format_prompt(prompt, system_prompt = SYSTEM_PROMPT):
'''
This function formats a question to gemma's chat template.
:param: prompt - a string with the user's query
:param: system_prompt: the system prompt to be used
:return: the formatted string for gemma's chat template
'''
return """<bos><start_of_turn>user
## Instructions
{}
## User
{}<end_of_turn>
<start_of_turn>model
""".format(system_prompt, prompt)
```
#### Training Process
The MLX LoRA fine-tuning approach was used; you can learn more about [MLX LoRA here](https://github.com/ml-explore/mlx-examples/blob/main/lora/README.md). Gemma-7b-it was loaded without any conversion. The default `batch_size = 16` was used, and to reach a 3200-iteration fine-tuned model the model was tuned for 800 iterations four times. Once the fine-tuned weights were created, the model was fused using MLX's fuse functionality; you can learn more about [fusing with MLX here](https://github.com/ml-explore/mlx-examples/blob/main/lora/README.md#Fuse-and-Upload). One important change when fusing with MLX was modifying the MLX package code to include `"format":"pt"` in the metadata so this model can be used with the transformers library. To do that, tweak the library code in `<path_to_your_site-packages>/mlx_lm/utils.py` by replacing `mx.save_safetensors(str(shard_path), shard, metadata={"format":"mlx"})` with `mx.save_safetensors(str(shard_path), shard, metadata={"format":"pt"})` so the fused weights are written with that metadata attribute. Special thanks to [Alexweberk's guide on GitHub](https://gist.github.com/alexweberk/635431b5c5773efd6d1755801020429f) for helping solve this issue. Finally, the fused model was uploaded to this HuggingFace repo!
If you look at the GitHub repo for this project, `mlx_lora.sh` contains the command used for the LoRA fine-tuning, `mlx_fuse.sh` the command for fusing the model, and `mlx_upload.sh` the upload command. There is also an optional `mlx_convert.sh` for converting the Google Gemma 7b-it model before fine-tuning, if desired.
## Evaluation
Testing loss and perplexity were the two metrics used to evaluate the Angora models. A summary of the results for all the different iteration models is included below.
### Results
| Number of iterations | Testing Loss | Perplexity |
|:----------|:----------|:---------|
| 800 | 0.569 | 1.766 |
| 1600 | 0.302 | 1.352 |
| 2400 | 0.225 | 1.252 |
| 3200 | 0.185 | 1.203 |
| 4000 | 0.170 | 1.185 |
### Testing Data
The testing data is available [here](https://huggingface.co/datasets/band2001/stolaf-angora/viewer/default/test).
## Model Card Contact
Ben Anderson - [ander6@stolaf.edu](mailto:ander6@stolaf.edu)
Keegan Murray - [murray7@stolaf.edu](mailto:murray7@stolaf.edu)
|
stulcrad/CNEC_1_1_robeczech-base | stulcrad | 2024-04-25T15:41:43Z | 8 | 0 | transformers | [transformers, safetensors, roberta, token-classification, generated_from_trainer, dataset:cnec, base_model:ufal/robeczech-base, base_model:finetune:ufal/robeczech-base, license:cc-by-nc-sa-4.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | token-classification | 2024-04-23T22:08:33Z |
---
license: cc-by-nc-sa-4.0
base_model: ufal/robeczech-base
tags:
- generated_from_trainer
datasets:
- cnec
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: CNEC_1_1_robeczech-base
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cnec
type: cnec
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.8579982891360137
- name: Recall
type: recall
value: 0.8856512141280353
- name: F1
type: f1
value: 0.8716054746904193
- name: Accuracy
type: accuracy
value: 0.9511284046692607
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_1_1_robeczech-base
This model is a fine-tuned version of [ufal/robeczech-base](https://huggingface.co/ufal/robeczech-base) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3233
- Precision: 0.8580
- Recall: 0.8857
- F1: 0.8716
- Accuracy: 0.9511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of an equivalent `Trainer` setup follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
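A sketch of what an equivalent 🤗 Transformers `Trainer` setup might look like with the hyperparameters above. The `num_labels` value, the tokenized dataset objects, and the output directory name are assumptions and not taken from this card.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("ufal/robeczech-base")
model = AutoModelForTokenClassification.from_pretrained(
    "ufal/robeczech-base",
    num_labels=num_labels,  # assumed: size of the CNEC BIO tag set, defined elsewhere
)

args = TrainingArguments(
    output_dir="CNEC_1_1_robeczech-base",  # assumed output directory name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    seed=42,
    # Adam betas/epsilon are the Trainer defaults and match the values listed above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_train,        # assumed: tokenized CNEC train split
    eval_dataset=tokenized_validation,    # assumed: tokenized CNEC validation split
    tokenizer=tokenizer,
)
trainer.train()
```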
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3724 | 3.41 | 2000 | 0.3332 | 0.7990 | 0.8230 | 0.8108 | 0.9376 |
| 0.1863 | 6.81 | 4000 | 0.2656 | 0.8515 | 0.8636 | 0.8575 | 0.9455 |
| 0.1109 | 10.22 | 6000 | 0.2575 | 0.8505 | 0.8737 | 0.8619 | 0.9493 |
| 0.068 | 13.63 | 8000 | 0.2804 | 0.8567 | 0.8790 | 0.8677 | 0.9503 |
| 0.0466 | 17.04 | 10000 | 0.2952 | 0.8573 | 0.8830 | 0.8699 | 0.9498 |
| 0.0305 | 20.44 | 12000 | 0.2992 | 0.8618 | 0.8865 | 0.8740 | 0.9520 |
| 0.0231 | 23.85 | 14000 | 0.3272 | 0.8567 | 0.8843 | 0.8703 | 0.9512 |
| 0.02 | 27.26 | 16000 | 0.3233 | 0.8580 | 0.8857 | 0.8716 | 0.9511 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
band2001/stolaf-angora-4000 | band2001 | 2024-04-25T15:38:19Z | 6 | 0 | transformers | [transformers, safetensors, gemma, text-generation, conversational, dataset:band2001/stolaf-angora, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2024-04-10T01:57:48Z |
---
license: mit
datasets:
- band2001/stolaf-angora
---
# Model Card for Angora-4000
<!-- Provide a quick summary of what the model is/does. -->
This model has been created to help computer science students at St. Olaf College (Northfield, MN) answer questions about fundamental CS principles as well as questions about the specific technical stacks and procedures St. Olaf Computer Science uses.
## Angora-4000 Details
This model is built on [Google's Gemma 7b-it](https://huggingface.co/google/gemma-7b-it). It was fine-tuned on a dataset created specifically to address St. Olaf-specific computer science questions; some of these questions reference the specific Git instance the institution uses or walk through the steps to declare the computer science major. The model was fine-tuned with MLX on an Apple M3 Max chip for 4000 iterations, using LoRA as the fine-tuning method.
- **Developed by:** Ben Anderson & Keegan Murray
- **Funded by:** St. Olaf College MSCS Department
- **Model type:** Generative
- **License:** MIT
- **Finetuned from model:** [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it)
<!-- Provide the basic links for the model. -->
- **Repository:** See the GitHub repository [here](https://github.com/band2001/stolaf-angora)
- **Paper:** Coming soon...
- **Demo:** A video demo is available [here](https://drive.google.com/file/d/1iwThVj88FTgLNANZdv2NineRcBXAqtZp/view?usp=sharing).
## Uses
This is intended to be used by Computer Science students at St. Olaf College. While it can be used broadly for general computer science questions, it has been finetuned to answer questions specific to the St. Olaf Computer Science program.
## How to Get Started with the Model
Use the code below to get started with the model.
### Direct Use With Transformers Library
#### Use a pipeline as a high-level helper
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="band2001/stolaf-angora-4000")
```
#### Load model directly
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("band2001/stolaf-angora-4000")
model = AutoModelForCausalLM.from_pretrained("band2001/stolaf-angora-4000", device_map="auto")
input_ids = tokenizer("YOUR PROMPT HERE", return_tensors="pt").to("YOUR DEVICE IF USING GPU ACCELERATION")
outputs = model.generate(**input_ids, max_new_tokens=256)
decoded_output = tokenizer.decode(outputs[0])
```
### Direct Use With MLX Library
Note that MLX can only be used on Apple Silicon Macs; one of the Max-series chips (or better) is recommended.
```python
from mlx_lm import load, generate
def format_prompt(prompt, system_prompt = "YOUR SYSTEM PROMPT"):
return """<bos><start_of_turn>user
## Instructions
{}
## User
{}<end_of_turn>
<start_of_turn>model
""".format(system_prompt, prompt)
model, tokenizer = load("band2001/stolaf-angora-4000")
prompt = format_prompt("YOUR PROMPT HERE")
decoded_output = generate(
model,
tokenizer,
prompt=prompt,
verbose=True,
temp=0.0,
max_tokens=256,
)
```
### Out-of-Scope Use
Outside of asking questions about computer science topics (both general and specific to St. Olaf College), this model should not be used for other inference. Asking questions about other topics will likely yield answers, but those responses fall outside the fine-tuning data and will most likely contain errors and/or potentially offensive content.
## Bias, Risks, and Limitations
Because we created the fine-tuning dataset from scratch, it is relatively limited compared to the overall size of the model: our dataset has about 2000 observations, while the model has roughly 8.5B parameters. So while the dataset had a noticeable effect on the tuning of this model, the model will still occasionally fall back on its pre-trained knowledge and give partially incorrect answers to St. Olaf-specific questions.
Also note the limitations present in the [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) model and assume they are present in this model as well.
## Training Details
### Training Data
The training data can be found in the St. Olaf Angora Dataset ([band2001/stolaf-angora](https://huggingface.co/datasets/band2001/stolaf-angora)).
### Training Procedure
To train the model, the data needs to be in the following format. Note that the data in [band2001/stolaf-angora](https://huggingface.co/datasets/band2001/stolaf-angora) is already formatted this way.
```
<bos><start_of_turn>user
## Instructions
system prompt goes here
## User
prompt/query goes here<end_of_turn>
<start_of_turn>model
model response here (put a response here for tuning purposes)<end_of_turn><eos>
```
Once the data is in the correct format, fine-tuning with QLoRA is recommended. The model can be fine-tuned either with mlx-lm and MPS (to tune on an Apple Silicon machine) or with a bitsandbytes configuration and CUDA (to tune on a machine with NVIDIA GPUs).
#### Preprocessing
To preprocess your data to be in the correct format outlined above, you can use the following helper function:
```python
def generate_prompt(entry, system_prompt = SYSTEM_PROMPT):
'''
This function formats a question/answer pair to gemma's chat template.
:param: entry - a dictionary with an instruction and a response
:param: system_prompt: the system prompt to be used
:return: the formatted string for gemma's chat template
'''
return """<bos><start_of_turn>user
## Instructions
{}
## User
{}<end_of_turn>
<start_of_turn>model
{}<end_of_turn><eos>""".format(system_prompt, entry["instruction"], entry["response"])
```
When trying to use inference with this model, you can format the user's query using this helper function:
```python
def format_prompt(prompt, system_prompt = SYSTEM_PROMPT):
'''
This function formats a question to gemma's chat template.
:param: prompt - a string with the user's query
:param: system_prompt: the system prompt to be used
:return: the formatted string for gemma's chat template
'''
return """<bos><start_of_turn>user
## Instructions
{}
## User
{}<end_of_turn>
<start_of_turn>model
""".format(system_prompt, prompt)
```
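Putting the pieces together, the inference helper can be combined with the transformers loading code shown earlier. This is a sketch: the `SYSTEM_PROMPT` value and the sample question are placeholders.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

SYSTEM_PROMPT = "YOUR SYSTEM PROMPT"  # placeholder

tokenizer = AutoTokenizer.from_pretrained("band2001/stolaf-angora-4000")
model = AutoModelForCausalLM.from_pretrained("band2001/stolaf-angora-4000", device_map="auto")

# Wrap the user's query in Gemma's chat template before generating.
prompt = format_prompt("What are the steps to declare the computer science major?", SYSTEM_PROMPT)
# add_special_tokens=False because format_prompt already includes <bos>.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```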
#### Training Process
The MLX LoRA fine-tuning approach was used; you can learn more about [MLX LoRA here](https://github.com/ml-explore/mlx-examples/blob/main/lora/README.md). Gemma-7b-it was loaded without any conversion. The default `batch_size = 16` was used, and to reach a 4000-iteration fine-tuned model the model was tuned for 800 iterations five times. Once the fine-tuned weights were created, the model was fused using MLX's fuse functionality; you can learn more about [fusing with MLX here](https://github.com/ml-explore/mlx-examples/blob/main/lora/README.md#Fuse-and-Upload). One important change when fusing with MLX was modifying the MLX package code to include `"format":"pt"` in the metadata so this model can be used with the transformers library. To do that, tweak the library code in `<path_to_your_site-packages>/mlx_lm/utils.py` by replacing `mx.save_safetensors(str(shard_path), shard, metadata={"format":"mlx"})` with `mx.save_safetensors(str(shard_path), shard, metadata={"format":"pt"})` so the fused weights are written with that metadata attribute. Special thanks to [Alexweberk's guide on GitHub](https://gist.github.com/alexweberk/635431b5c5773efd6d1755801020429f) for helping solve this issue. Finally, the fused model was uploaded to this HuggingFace repo!
If you look at the GitHub repo for this project, `mlx_lora.sh` contains the command used for the LoRA fine-tuning, `mlx_fuse.sh` the command for fusing the model, and `mlx_upload.sh` the upload command. There is also an optional `mlx_convert.sh` for converting the Google Gemma 7b-it model before fine-tuning, if desired.
## Evaluation
Testing loss and perplexity were the two metrics used to evaluate the Angora models. A summary of the results for all the different iteration models is included below.
### Results
| Number of iterations | Testing Loss | Perplexity |
|:----------|:----------|:---------|
| 800 | 0.569 | 1.766 |
| 1600 | 0.302 | 1.352 |
| 2400 | 0.225 | 1.252 |
| 3200 | 0.185 | 1.203 |
| 4000 | 0.170 | 1.185 |
### Testing Data
The testing data is available [here](https://huggingface.co/datasets/band2001/stolaf-angora/viewer/default/test).
## Model Card Contact
Ben Anderson - [ander6@stolaf.edu](mailto:ander6@stolaf.edu)
Keegan Murray - [murray7@stolaf.edu](mailto:murray7@stolaf.edu)
|
gboateng/adom-min-v1_model | gboateng | 2024-04-25T15:36:09Z | 0 | 0 | transformers | [transformers, safetensors, text-generation-inference, unsloth, llama, trl, en, base_model:unsloth/llama-3-8b-bnb-4bit, base_model:finetune:unsloth/llama-3-8b-bnb-4bit, license:apache-2.0, endpoints_compatible, region:us] | null | 2024-04-25T15:35:59Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** gboateng
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LaurensVdP/Mistral-7B-Instruct-v0.2-Q8_0-GGUF | LaurensVdP | 2024-04-25T15:34:35Z | 1 | 0 | null | [gguf, finetuned, llama-cpp, gguf-my-repo, text-generation, license:apache-2.0, endpoints_compatible, region:us, conversational] | text-generation | 2024-04-25T15:34:11Z |
---
license: apache-2.0
tags:
- finetuned
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
inference: true
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# LaurensVdP/Mistral-7B-Instruct-v0.2-Q8_0-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-7B-Instruct-v0.2`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo LaurensVdP/Mistral-7B-Instruct-v0.2-Q8_0-GGUF --model mistral-7b-instruct-v0.2.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo LaurensVdP/Mistral-7B-Instruct-v0.2-Q8_0-GGUF --model mistral-7b-instruct-v0.2.Q8_0.gguf -c 2048
```
Note: you can also use this checkpoint directly by following the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-7b-instruct-v0.2.Q8_0.gguf -n 128
```
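The GGUF file can also be used from Python through the llama-cpp-python bindings. The following is a sketch, assuming the `llama-cpp-python` package is installed and the quantized file has already been downloaded locally from this repo.

```python
from llama_cpp import Llama

# Assumes mistral-7b-instruct-v0.2.Q8_0.gguf is present in the working directory.
llm = Llama(model_path="mistral-7b-instruct-v0.2.Q8_0.gguf", n_ctx=2048)

# Mistral-Instruct uses the [INST] ... [/INST] prompt format.
output = llm(
    "[INST] What is your favorite condiment? [/INST]",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```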
|
yamaguchi-kota/gemma-medical_qa-Finetune | yamaguchi-kota | 2024-04-25T15:26:14Z | 134 | 0 | transformers | [transformers, safetensors, gemma, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2024-04-25T15:23:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
smacky42/sn17-6-2 | smacky42 | 2024-04-25T15:25:45Z | 1 | 0 | diffusers | [diffusers, safetensors, arxiv:1910.09700, region:us] | null | 2024-04-25T15:23:30Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ornelas7/code-search-net-tokenizer | Ornelas7 | 2024-04-25T15:22:26Z | 0 | 0 | transformers | [transformers, arxiv:1910.09700, endpoints_compatible, region:us] | null | 2024-04-25T15:22:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tutuhu/shanshui3 | tutuhu | 2024-04-25T15:22:03Z | 33 | 0 | transformers | [transformers, safetensors, license:other, endpoints_compatible, region:us] | null | 2024-04-25T11:33:00Z |
---
license: other
license_name: open
license_link: LICENSE
---
|
rwr20/ppo-LunarLander-v2 | rwr20 | 2024-04-25T15:12:52Z | 0 | 0 | stable-baselines3 | [stable-baselines3, LunarLander-v2, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2024-04-18T13:25:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.22 +/- 13.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
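A sketch of what that code might look like for this checkpoint; the `.zip` filename inside the repo is an assumption based on the usual SB3 naming convention, so check the repo's file listing before running.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename; verify against the actual files in rwr20/ppo-LunarLander-v2.
checkpoint = load_from_hub(repo_id="rwr20/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded agent over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```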
|
eshan292/custom-deter | eshan292 | 2024-04-25T15:06:51Z | 162 | 0 | transformers | [transformers, safetensors, detr, object-detection, arxiv:1910.09700, endpoints_compatible, region:us] | object-detection | 2024-04-23T12:21:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomaarsen/distilroberta-base-nli-adaptive-layer
|
tomaarsen
| 2024-04-25T15:04:31Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"loss:AdaptiveLayerLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2402.14776",
"arxiv:1705.00652",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-04-25T15:02:05Z |
---
language:
- en
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- loss:AdaptiveLayerLoss
- loss:MultipleNegativesRankingLoss
base_model: distilbert/distilroberta-base
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
widget:
- source_sentence: Certainly.
sentences:
- '''Of course.'''
- The idea is a good one.
- the woman is asleep at home
- source_sentence: He walked.
sentences:
- The man was walking.
- The people are running.
- The women are making pizza.
- source_sentence: Double pig.
sentences:
- Ah, triple pig!
- He had no real answer.
- Do you not know?
- source_sentence: Very simply.
sentences:
- Not complicatedly.
- People are on a beach.
- The man kicks the umpire.
- source_sentence: Introduction
sentences:
- Analytical Perspectives.
- A man reads the paper.
- No one wanted Singapore.
pipeline_tag: sentence-similarity
co2_eq_emissions:
emissions: 94.69690706493431
energy_consumed: 0.24362341090329948
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.849
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: SentenceTransformer based on distilbert/distilroberta-base
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.845554152020916
name: Pearson Cosine
- type: spearman_cosine
value: 0.8486455482928023
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8475103134032791
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8505660318245544
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8494883021932786
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8526835635349959
name: Spearman Euclidean
- type: pearson_dot
value: 0.7866563719943611
name: Pearson Dot
- type: spearman_dot
value: 0.7816258810453734
name: Spearman Dot
- type: pearson_max
value: 0.8494883021932786
name: Pearson Max
- type: spearman_max
value: 0.8526835635349959
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.8182808182081737
name: Pearson Cosine
- type: spearman_cosine
value: 0.8148039503538166
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8132463174874629
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8088248622918064
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8148200486691981
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8105059611031759
name: Spearman Euclidean
- type: pearson_dot
value: 0.7499699563291125
name: Pearson Dot
- type: spearman_dot
value: 0.7350068244681712
name: Spearman Dot
- type: pearson_max
value: 0.8182808182081737
name: Pearson Max
- type: spearman_max
value: 0.8148039503538166
name: Spearman Max
---
# SentenceTransformer based on distilbert/distilroberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on the [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) <!-- at revision fb53ab8802853c8e4fbdbcd0529f21fc6f459b2b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/distilroberta-base-nli-adaptive-layer")
# Run inference
sentences = [
'Introduction',
'Analytical Perspectives.',
'A man reads the paper.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8456 |
| **spearman_cosine** | **0.8486** |
| pearson_manhattan | 0.8475 |
| spearman_manhattan | 0.8506 |
| pearson_euclidean | 0.8495 |
| spearman_euclidean | 0.8527 |
| pearson_dot | 0.7867 |
| spearman_dot | 0.7816 |
| pearson_max | 0.8495 |
| spearman_max | 0.8527 |
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8183 |
| **spearman_cosine** | **0.8148** |
| pearson_manhattan | 0.8132 |
| spearman_manhattan | 0.8088 |
| pearson_euclidean | 0.8148 |
| spearman_euclidean | 0.8105 |
| pearson_dot | 0.75 |
| spearman_dot | 0.735 |
| pearson_max | 0.8183 |
| spearman_max | 0.8148 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### sentence-transformers/all-nli
* Dataset: [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [e587f0c](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/e587f0c494c20fb9a1853cdfb43d42576d60a7e5)
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.38 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.8 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"n_layers_per_step": 1,
"last_layer_weight": 1.0,
"prior_layers_weight": 1.0,
"kl_div_weight": 1.0,
"kl_temperature": 0.3
}
```
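As a rough sketch of how this loss configuration can be set up with the Sentence Transformers API (the variable names here are illustrative and not taken from the original training script):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import AdaptiveLayerLoss, MultipleNegativesRankingLoss

# Start from the same base model used for this run
model = SentenceTransformer("distilbert/distilroberta-base")

# Wrap the ranking loss so that earlier transformer layers are also trained
inner_loss = MultipleNegativesRankingLoss(model)
loss = AdaptiveLayerLoss(
    model=model,
    loss=inner_loss,
    n_layers_per_step=1,
    last_layer_weight=1.0,
    prior_layers_weight=1.0,
    kl_div_weight=1.0,
    kl_temperature=0.3,
)
```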
### Evaluation Dataset
#### sentence-transformers/all-nli
* Dataset: [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [e587f0c](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/e587f0c494c20fb9a1853cdfb43d42576d60a7e5)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.02 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.81 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.37 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"n_layers_per_step": 1,
"last_layer_weight": 1.0,
"prior_layers_weight": 1.0,
"kl_div_weight": 1.0,
"kl_temperature": 0.3
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
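A minimal sketch of how these non-default values map onto the Sentence Transformers training arguments; the output directory is a placeholder and every argument not listed above keeps its default:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/distilroberta-base-nli-adaptive-layer",  # placeholder path
    num_train_epochs=1,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```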
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: False
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: None
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:------:|:----:|:-------------:|:------:|:-----------------------:|:------------------------:|
| 0.0229 | 100 | 7.0517 | 3.9378 | 0.7889 | - |
| 0.0459 | 200 | 4.4877 | 3.8105 | 0.7906 | - |
| 0.0688 | 300 | 4.0315 | 3.6401 | 0.7966 | - |
| 0.0918 | 400 | 3.822 | 3.3537 | 0.7883 | - |
| 0.1147 | 500 | 3.0608 | 2.5975 | 0.7973 | - |
| 0.1376 | 600 | 2.6304 | 2.3956 | 0.7943 | - |
| 0.1606 | 700 | 2.7723 | 2.0379 | 0.8009 | - |
| 0.1835 | 800 | 2.3556 | 1.9645 | 0.7984 | - |
| 0.2065 | 900 | 2.4998 | 1.9086 | 0.8017 | - |
| 0.2294 | 1000 | 2.1834 | 1.8400 | 0.7973 | - |
| 0.2524 | 1100 | 2.2793 | 1.5831 | 0.8102 | - |
| 0.2753 | 1200 | 2.1042 | 1.6485 | 0.8004 | - |
| 0.2982 | 1300 | 2.1365 | 1.7084 | 0.8013 | - |
| 0.3212 | 1400 | 2.0096 | 1.5520 | 0.8064 | - |
| 0.3441 | 1500 | 2.0492 | 1.4917 | 0.8084 | - |
| 0.3671 | 1600 | 1.8764 | 1.5447 | 0.8018 | - |
| 0.3900 | 1700 | 1.8611 | 1.5480 | 0.8046 | - |
| 0.4129 | 1800 | 1.972 | 1.5353 | 0.8075 | - |
| 0.4359 | 1900 | 1.8062 | 1.4633 | 0.8039 | - |
| 0.4588 | 2000 | 1.8565 | 1.4213 | 0.8027 | - |
| 0.4818 | 2100 | 1.8852 | 1.3860 | 0.8002 | - |
| 0.5047 | 2200 | 1.7939 | 1.5468 | 0.7910 | - |
| 0.5276 | 2300 | 1.7398 | 1.6041 | 0.7888 | - |
| 0.5506 | 2400 | 1.8535 | 1.5791 | 0.7949 | - |
| 0.5735 | 2500 | 1.8486 | 1.4871 | 0.7951 | - |
| 0.5965 | 2600 | 1.7379 | 1.5427 | 0.8019 | - |
| 0.6194 | 2700 | 1.7325 | 1.4585 | 0.8087 | - |
| 0.6423 | 2800 | 1.7664 | 1.5264 | 0.7965 | - |
| 0.6653 | 2900 | 1.7517 | 1.6344 | 0.7930 | - |
| 0.6882 | 3000 | 1.8329 | 1.4947 | 0.8008 | - |
| 0.7112 | 3100 | 1.7206 | 1.4917 | 0.8089 | - |
| 0.7341 | 3200 | 1.7138 | 1.4185 | 0.8065 | - |
| 0.7571 | 3300 | 1.3705 | 1.2040 | 0.8446 | - |
| 0.7800 | 3400 | 1.1289 | 1.1363 | 0.8447 | - |
| 0.8029 | 3500 | 1.0174 | 1.1049 | 0.8464 | - |
| 0.8259 | 3600 | 1.0188 | 1.0362 | 0.8466 | - |
| 0.8488 | 3700 | 0.9841 | 1.1391 | 0.8470 | - |
| 0.8718 | 3800 | 0.8466 | 1.0116 | 0.8485 | - |
| 0.8947 | 3900 | 0.9268 | 1.1323 | 0.8488 | - |
| 0.9176 | 4000 | 0.8686 | 1.0296 | 0.8495 | - |
| 0.9406 | 4100 | 0.9255 | 1.1737 | 0.8484 | - |
| 0.9635 | 4200 | 0.7991 | 1.0609 | 0.8486 | - |
| 0.9865 | 4300 | 0.8431 | 0.9976 | 0.8486 | - |
| 1.0 | 4359 | - | - | - | 0.8148 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.244 kWh
- **Carbon Emitted**: 0.095 kg of CO2
- **Hours Used**: 0.849 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.0.0.dev0
- Transformers: 4.41.0.dev0
- PyTorch: 2.3.0+cu121
- Accelerate: 0.26.1
- Datasets: 2.18.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### AdaptiveLayerLoss
```bibtex
@misc{li20242d,
title={2D Matryoshka Sentence Embeddings},
author={Xianming Li and Zongxi Li and Jing Li and Haoran Xie and Qing Li},
year={2024},
eprint={2402.14776},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
i-pj/a2c-PandaReachDense-v3
|
i-pj
| 2024-04-25T15:04:25Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-25T14:59:37Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.36 +/- 0.17
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
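Until the author fills this in, here is a rough sketch of loading such a checkpoint; the checkpoint filename inside the repo is an assumption and may differ, so check the repository files first:

```python
import gymnasium as gym
import panda_gym  # registers the PandaReachDense-v3 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(
    repo_id="i-pj/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

# Run a short rollout with the trained policy
env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
for _ in range(100):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```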
|
LadislavVasina1/whisper-base-cs-cv11-train-noaug-test-noaug
|
LadislavVasina1
| 2024-04-25T14:59:00Z | 86 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-10T18:25:50Z |
---
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: test
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: cs
split: None
args: cs
metrics:
- name: Wer
type: wer
value: 35.16226470696578
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3770
- Wer: 35.1623
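A minimal usage sketch with the 🤗 Transformers pipeline (the audio file path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="LadislavVasina1/whisper-base-cs-cv11-train-noaug-test-noaug",
)
# Transcribe a Czech audio file (path is a placeholder)
result = asr("sample_cs.wav")
print(result["text"])
```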
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3007 | 1.4440 | 1000 | 0.4410 | 41.9825 |
| 0.1741 | 2.8881 | 2000 | 0.3800 | 36.4994 |
| 0.0971 | 4.3321 | 3000 | 0.3751 | 35.3022 |
| 0.079 | 5.7762 | 4000 | 0.3770 | 35.1623 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
akumaburn/Open_Orca_Llama-3-8B-1K
|
akumaburn
| 2024-04-25T14:51:51Z | 150 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"dataset:Open-Orca/OpenOrca",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-22T21:41:55Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
- Open-Orca/OpenOrca
---
# Open Orca Llama 3 8B
- **Fine-tuned using dataset:** https://huggingface.co/datasets/Open-Orca/OpenOrca
- **Step Count:** 1000
- **Batch Size:** 2
- **Gradient Accumulation Steps:** 4
- **Context Size:** 8192
- **Num examples:** 4,233,923
- **Trainable Parameters:** 41,943,040
- **Learning Rate:** 0.0625
- **Training Loss:** 1.090800
- **Fine-tuned using:** Google Colab Pro (Nvidia L4 runtime)
- **Developed by:** akumaburn
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
- **Prompt Format:** Alpaca (https://libertai.io/apis/text-generation/prompting.html)
Some GGUF quantizations are included as well.
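For reference, a minimal sketch of the Alpaca prompt layout linked above; the instruction text below is purely illustrative:

```python
# Build an Alpaca-style prompt; the instruction is only an example.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Explain the difference between a list and a tuple in Python.\n\n"
    "### Response:\n"
)
print(prompt)
```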
mistral-7b-openorca.Q8_0.gguf:
- **MMLU-Test:** Final result: **41.5836 +/- 0.4174**
- **Arc-Easy:** Final result: 72.6316 +/- 1.8691
- **Truthful QA:** Final result: **32.0685 +/- 1.6339**
- **Arc-Challenge:** Final result: **48.8294 +/- 2.8956**
llama-3-8b-bnb-4bit.Q8_0.gguf:
- **MMLU-Test:** Final result: 40.4074 +/- 0.4156
- **Arc-Easy:** Final result: 73.8596 +/- 1.8421
- **Truthful QA:** Final result: 26.6830 +/- 1.5484
- **Arc-Challenge:** Final result: 46.8227 +/- 2.8906
**Open_Orca_Llama-3-8B-unsloth.Q8_0.gguf**:
- **MMLU-Test:** Final result: 39.3818 +/- 0.4138
- **Arc-Easy:** Final result: 67.3684 +/- 1.9656
- **Truthful QA:** Final result: 29.0086 +/- 1.5886
- **Arc-Challenge:** Final result: 42.1405 +/- 2.8604
Meta-Llama-3-8B.Q8_0.gguf:
- **MMLU-Test:** Final result: 40.8664 +/- 0.4163
- **Arc-Easy:** Final result: **74.3860 +/- 1.8299**
- **Truthful QA:** Final result: 28.6414 +/- 1.5826
- **Arc-Challenge:** Final result: 47.1572 +/- 2.8917
Llama.cpp options used for testing:

```
--samplers "tfs;typical;temp" --draft 32 --ctx-size 8192 --temp 0.82 --tfs 0.8 --typical 1.1 --repeat-last-n 512 --batch-size 8192 --repeat-penalty 1.0 --n-gpu-layers 100 --threads 12
```
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
akumaburn/Alpaca-Llama-3-8B-GGUF
|
akumaburn
| 2024-04-25T14:51:37Z | 35 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"dataset:yahma/alpaca-cleaned",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T06:23:18Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
- yahma/alpaca-cleaned
---
# Alpaca-Llama-3-8B
- **Fine-tuned using dataset:** https://huggingface.co/datasets/yahma/alpaca-cleaned
- **Epoch Count:** 1
- **Step Count:** 6,470/6,470
- **Batch Size:** 2
- **Gradient Accumulation Steps:** 4
- **Context Size:** 8192
- **Num examples:** 51,760
- **Trainable Parameters:** 41,943,040
- **Learning Rate:** 0.00001
- **Training Loss:** 0.960000
- **Fine-tuned using:** Google Colab Pro (Nvidia T4 runtime)
- **Developed by:** akumaburn
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
- **Prompt Format:** Alpaca (https://libertai.io/apis/text-generation/prompting.html)
- **Chai ELO:** 1146.84 (https://console.chaiverse.com/models/akumaburn-alpaca-llama-3-8b_v1)
Full model can be found in https://huggingface.co/akumaburn/Alpaca-Llama-3-8B
mistral-7b-openorca.Q8_0.gguf:
- **MMLU-Test:** Final result: **41.5836 +/- 0.4174**
- **Arc-Easy:** Final result: 72.6316 +/- 1.8691
- **Truthful QA:** Final result: **32.0685 +/- 1.6339**
- **Arc-Challenge:** Final result: 48.8294 +/- 2.8956
llama-3-8b-bnb-4bit.Q8_0.gguf:
- **MMLU-Test:** Final result: 40.4074 +/- 0.4156
- **Arc-Easy:** Final result: 73.8596 +/- 1.8421
- **Truthful QA:** Final result: 26.6830 +/- 1.5484
- **Arc-Challenge:** Final result: 46.8227 +/- 2.8906
Open_Orca_Llama-3-8B-unsloth.Q8_0.gguf:
- **MMLU-Test:** Final result: 39.3818 +/- 0.4138
- **Arc-Easy:** Final result: 67.3684 +/- 1.9656
- **Truthful QA:** Final result: 29.0086 +/- 1.5886
- **Arc-Challenge:** Final result: 42.1405 +/- 2.8604
**Alpaca-Llama-3-8B-GGUF-unsloth.Q8_0.gguf**:
- **MMLU-Test:** Final result: 40.6441 +/- 0.4160
- **Arc-Easy:** Final result: **77.5439 +/- 1.7494**
- **Truthful QA:** Final result: 29.7430 +/- 1.6003
- **Arc-Challenge:** Final result: **50.5017 +/- 2.8963**
Meta-Llama-3-8B.Q8_0.gguf:
- **MMLU-Test:** Final result: 40.8664 +/- 0.4163
- **Arc-Easy:** Final result: 74.3860 +/- 1.8299
- **Truthful QA:** Final result: 28.6414 +/- 1.5826
- **Arc-Challenge:** Final result: 47.1572 +/- 2.8917
Llama.cpp options used for testing:

```
--samplers "tfs;typical;temp" --draft 32 --ctx-size 8192 --temp 0.82 --tfs 0.8 --typical 1.1 --repeat-last-n 512 --batch-size 8192 --repeat-penalty 1.0 --n-gpu-layers 100 --threads 12
```
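Putting those options together, an example invocation against the Q8_0 quantization from this repo might look like the following; the prompt text and local paths are illustrative only:

```bash
./main -m Alpaca-Llama-3-8B-GGUF-unsloth.Q8_0.gguf \
  --samplers "tfs;typical;temp" --draft 32 --ctx-size 8192 --temp 0.82 --tfs 0.8 --typical 1.1 \
  --repeat-last-n 512 --batch-size 8192 --repeat-penalty 1.0 --n-gpu-layers 100 --threads 12 \
  -e -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nSummarize the benefits of fine-tuning on cleaned data.\n\n### Response:\n" -n 256
```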
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sohamslc5/PHI3
|
sohamslc5
| 2024-04-25T14:51:31Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"en",
"dataset:sohamslc5/curr1",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T14:47:00Z |
---
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
base_model: "microsoft/Phi-3-mini-4k-instruct"
datasets:
- sohamslc5/curr1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xtuner/llava-phi-3-mini-xtuner
|
xtuner
| 2024-04-25T14:46:29Z | 11 | 4 |
xtuner
|
[
"xtuner",
"safetensors",
"llama",
"image-text-to-text",
"conversational",
"dataset:Lin-Chen/ShareGPT4V",
"region:us"
] |
image-text-to-text
| 2024-04-25T04:50:11Z |
---
datasets:
- Lin-Chen/ShareGPT4V
pipeline_tag: image-text-to-text
library_name: xtuner
---
<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
[](https://github.com/InternLM/xtuner)
</div>
## Model
llava-phi-3-mini is a LLaVA model fine-tuned from [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [ShareGPT4V-PT](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) and [InternVL-SFT](https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat#prepare-training-datasets) by [XTuner](https://github.com/InternLM/xtuner).
**Note: This model is in XTuner LLaVA format.**
Resources:
- GitHub: [xtuner](https://github.com/InternLM/xtuner)
- HuggingFace LLaVA format model: [xtuner/llava-phi-3-mini-hf](https://huggingface.co/xtuner/llava-phi-3-mini-hf)
- Official LLaVA format model: [xtuner/llava-phi-3-mini](https://huggingface.co/xtuner/llava-phi-3-mini)
- GGUF LLaVA model: [xtuner/llava-phi-3-mini-gguf](https://huggingface.co/xtuner/llava-phi-3-mini-gguf)
## Details
| Model | Visual Encoder | Projector | Resolution | Pretraining Strategy | Fine-tuning Strategy | Pretrain Dataset | Fine-tune Dataset | Pretrain Epoch | Fine-tune Epoch |
| :-------------------- | ------------------: | --------: | ---------: | ---------------------: | ------------------------: | ------------------------: | -----------------------: | -------------- | --------------- |
| LLaVA-v1.5-7B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, Frozen ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) | 1 | 1 |
| LLaVA-Llama-3-8B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) | 1 | 1 |
| LLaVA-Llama-3-8B-v1.1 | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | ShareGPT4V-PT (1246K) | InternVL-SFT (1268K) | 1 | 1 |
| **LLaVA-Phi-3-mini** | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, Full ViT | ShareGPT4V-PT (1246K) | InternVL-SFT (1268K) | 1 | 2 |
## Results
<div align="center">
<img src="https://github.com/InternLM/xtuner/assets/36994684/78524f65-260d-4ae3-a687-03fc5a19dcbb" alt="Image" width="500" />
</div>
| Model | MMBench Test (EN) | MMMU Val | SEED-IMG | AI2D Test | ScienceQA Test | HallusionBench aAcc | POPE | GQA | TextVQA | MME | MMStar |
| :-------------------- | :---------------: | :-------: | :------: | :-------: | :------------: | :-----------------: | :--: | :--: | :-----: | :------: | :----: |
| LLaVA-v1.5-7B | 66.5 | 35.3 | 60.5 | 54.8 | 70.4 | 44.9 | 85.9 | 62.0 | 58.2 | 1511/348 | 30.3 |
| LLaVA-Llama-3-8B | 68.9 | 36.8 | 69.8 | 60.9 | 73.3 | 47.3 | 87.2 | 63.5 | 58.0 | 1506/295 | 38.2 |
| LLaVA-Llama-3-8B-v1.1 | 72.3 | 37.1 | 70.1 | 70.0 | 72.9 | 47.7 | 86.4 | 62.6 | 59.0 | 1469/349 | 45.1 |
| **LLaVA-Phi-3-mini** | 69.2 | 41.4 | 70.0 | 69.3 | 73.7 | 49.8 | 87.3 | 61.5 | 57.8 | 1477/313 | 43.7 |
## Quickstart
### Installation
```shell
pip install 'git+https://github.com/InternLM/xtuner.git#egg=xtuner[deepspeed]'
```
### Chat
```shell
xtuner chat xtuner/llava-phi-3-mini-xtuner \
--llava xtuner/llava-phi-3-mini-xtuner \
--prompt-template phi3_chat \
--image $IMAGE_PATH
```
### MMBench Evaluation
XTuner integrates the MMBench evaluation, and you can perform evaluations with the following command!
```bash
xtuner mmbench xtuner/llava-phi-3-mini-xtuner \
--llava xtuner/llava-phi-3-mini-xtuner \
--prompt-template phi3_chat \
--data-path $MMBENCH_DATA_PATH \
--work-dir $RESULT_PATH
```
After the evaluation is completed, the results are printed directly if you evaluated on the development set; if you evaluated on the test set, you need to submit `mmbench_result.xlsx` to the official MMBench leaderboard to obtain the final accuracy results.
### Reproduce
Please refer to [docs](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/phi3_mini_4k_instruct_clip_vit_large_p14_336#readme).
## Citation
```bibtex
@misc{2023xtuner,
title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
author={XTuner Contributors},
howpublished = {\url{https://github.com/InternLM/xtuner}},
year={2023}
}
```
|
dhrubochowdhury5758778/finetune-GPT2-IMDb
|
dhrubochowdhury5758778
| 2024-04-25T14:40:20Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-25T14:31:52Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetune-GPT2-IMDb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-GPT2-IMDb
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5398
- Accuracy: 0.909
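A minimal usage sketch with the 🤗 Transformers pipeline; note that the label names depend on how the classifier head was configured and may simply appear as LABEL_0/LABEL_1:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dhrubochowdhury5758778/finetune-GPT2-IMDb",
)
print(classifier("This movie was an absolute delight from start to finish."))
```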
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
|
stablediffusionapi/rev-anim
|
stablediffusionapi
| 2024-04-25T14:39:13Z | 56 | 3 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-24T15:59:18Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to "rev-anim".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/rev-anim)
Model link: [View model](https://modelslab.com/models/rev-anim)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "rev-anim",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
zaq-hack/ChaoticSoliloquy-4x8B-bpw500-h6-exl2-rpcal
|
zaq-hack
| 2024-04-25T14:36:56Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-04-25T07:43:21Z |
---
license: llama3
language:
- en
tags:
- moe
---

(Maybe I'll change the waifu picture later.)
An experimental RP-oriented MoE; the idea was to get a model that is equal to or better than Mixtral 8x7B and its finetunes at RP/ERP tasks.
[GGUF, Exl2](https://huggingface.co/collections/xxx777xxxASD/chaoticsoliloquy-4x8b-6628a759b5a60d8d3f51ed62)
### ChaoticSoliloquy-4x8B
```
base_model: jeiku_Chaos_RP_l3_8B
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
- source_model: ChaoticNeutrals_Poppy_Porpoise-v0.6-L3-8B
- source_model: jeiku_Chaos_RP_l3_8B
- source_model: openlynn_Llama-3-Soliloquy-8B
- source_model: Sao10K_L3-Solana-8B-v1
```
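A sketch of reproducing the merge with mergekit, assuming the YAML above is saved as `config.yaml` and the source models it references are available locally or on the Hub:

```bash
pip install mergekit
mergekit-moe config.yaml ./ChaoticSoliloquy-4x8B
```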
## Models used
- [ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B)
- [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B)
- [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B)
- [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)
## Vision
[llama3_mmproj](https://huggingface.co/ChaoticNeutrals/Llava_1.5_Llama3_mmproj)

## Prompt format: Llama 3
|
BenjaminTT/NLPGroupProject-Finetune-bio-mobilebert-AL-Promt
|
BenjaminTT
| 2024-04-25T14:36:39Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mobilebert",
"multiple-choice",
"generated_from_trainer",
"base_model:nlpie/bio-mobilebert",
"base_model:finetune:nlpie/bio-mobilebert",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-04-25T14:24:15Z |
---
license: mit
base_model: nlpie/bio-mobilebert
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: NLPGroupProject-Finetune-bio-mobilebert-AL-Promt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLPGroupProject-Finetune-bio-mobilebert-AL-Promt
This model is a fine-tuned version of [nlpie/bio-mobilebert](https://huggingface.co/nlpie/bio-mobilebert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0324
- Accuracy: 0.742
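A minimal usage sketch with `AutoModelForMultipleChoice`; the question and answer options below are illustrative, and the script simply prints the highest-scoring option:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "BenjaminTT/NLPGroupProject-Finetune-bio-mobilebert-AL-Promt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "Which vitamin deficiency causes scurvy?"  # illustrative example
options = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]

# Pair the question with every option, then add the batch dimension the model expects
encoding = tokenizer([question] * len(options), options, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_options)
print("Predicted option:", options[logits.argmax(dim=-1).item()])
```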
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.3121 | 250 | 0.8727 | 0.727 |
| 35.354 | 0.6242 | 500 | 0.7830 | 0.738 |
| 35.354 | 0.9363 | 750 | 0.7660 | 0.745 |
| 0.8233 | 1.2484 | 1000 | 0.9794 | 0.744 |
| 0.8233 | 1.5605 | 1250 | 0.8635 | 0.746 |
| 0.7285 | 1.8727 | 1500 | 0.6671 | 0.747 |
| 0.7285 | 2.1848 | 1750 | 1.0348 | 0.758 |
| 0.5734 | 2.4969 | 2000 | 1.0761 | 0.747 |
| 0.5734 | 2.8090 | 2250 | 1.0324 | 0.742 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
|
martins96/whisper-large-v3-test-15epochs
|
martins96
| 2024-04-25T14:36:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T14:36:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lightonai/mambaoutai
|
lightonai
| 2024-04-25T14:32:15Z | 27 | 4 |
transformers
|
[
"transformers",
"safetensors",
"mamba",
"text-generation",
"conversational",
"fr",
"en",
"dataset:togethercomputer/RedPajama-Data-V2",
"dataset:stingning/ultrachat",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-18T16:30:03Z |
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-V2
- stingning/ultrachat
language:
- fr
- en
metrics:
- accuracy
- perplexity
---
# Mambaoutai 1.6B
Mambaoutai is the result of the experiments and training runs described in the [following blog post](https://www.lighton.ai/fr/blog/blog-4/passing-the-torch-training-a-mamba-model-for-smooth-handover-54), where all details about the model series are shared. Mambaoutai is a series of small Mamba checkpoints released for the community to explore, trained on French, English and code. We ran two different decay phases with the WSD scheduler, and release model checkpoints pretrained both with and without instruction data.
## Usage
You need to install `transformers` from `main` until `transformers==4.39.0` is released.
```bash
pip install git+https://github.com/huggingface/transformers@main
```
We also recommend you to install both `causal-conv1d` and `mamba-ssm` using:
```bash
pip install causal-conv1d>=1.2.0
pip install mamba-ssm>=1.2.0
```
If either of these two is not installed, the "eager" implementation will be used (not recommended); otherwise, the more optimised CUDA kernels will be used.
### Generation
Use this snippet of code to generate text from the model:
```python
from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
import torch
model_has_instruct_data = True  # set according to the checkpoint you load (instruct vs. pretrain-only)

if model_has_instruct_data:
    # use chat tokens
    prompt = "<start_user>Tell me something about Paris.<end_message><start_assistant>"
else:
    # prompt the non-instruction-tuned model gently
    prompt = "This is a text about Paris. Paris is"
tokenizer = AutoTokenizer.from_pretrained("lightonai/mambaoutai")
model = MambaForCausalLM.from_pretrained("lightonai/mambaoutai")
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
```
### Training checkpoints
You can find some of the training checkpoints in the branches of this repository, each branch corresponding to the model at a given point in time during training.
You can do inference with these training checkpoints by adding the `revision` parameter to the `from_pretrained` method.
For example, to load the model checkpoint after 30000 steps of pretraining, you can use the following code:
```python
from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("lightonai/mambaoutai", revision="pre-30000")
model = MambaForCausalLM.from_pretrained("lightonai/mambaoutai", revision="pre-30000")
input_ids = tokenizer("What is a mamba?", return_tensors="pt")["input_ids"]
out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
```
### On-device Inference
Since Mambaoutai is only 1.6B parameters, it can be run on a CPU with reasonable speed.
Here is an example of how to run it on llama.cpp:
```bash
# Clone llama.cpp repository and compile it from source
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
# Create a venv and install dependencies
conda create -n mamba-cpp python=3.10
conda activate mamba-cpp
pip install -r requirements/requirements-convert-hf-to-gguf.txt
# Download the weights, tokenizer, config, tokenizer_config and special_tokens_map from this repo and
# put them in a directory 'Mambaoutai/'
mkdir Mambaoutai
# Convert the weights to GGUF format
python convert-hf-to-gguf.py Mambaoutai
# Run inference with a prompt
./main -m Mambaoutai/ggml-model-f16.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e -ngl 1
```
### Training Hardware
The model checkpoints with no instruction data have been fully trained on an NVIDIA DGX H100 provided by OVH Cloud, whereas the decay phases with instruction data have been carried out on an HPE Cray with 8xH100 on Orange Cloud Avenue.
The ablation experiments were conducted on 16 nodes (4xA100-40GB) on MeluXina.
### Model hyperparameters
More details about the model hyperparameters are given in the table below:
| Parameter | Value |
|-----------------------|----------|
| d_model | 2688 |
| n_layer | 28 |
| vocab_size | 65024 |
| context_len | 4096 |
| rms_norm | true |
| residual_in_fp32 | true |
| fused_add_norm | true |
| conv_kernel | 4 |
| d_inner | 5376 |
| state_size | 16 |
| dtype | bfloat16 |
| tie_word_embeddings | false |
| non embeddings params | 1.27B |
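For reference, here is a minimal sketch of how these values could map onto a Hugging Face `MambaConfig`. The field mapping (`d_model` → `hidden_size`, `n_layer` → `num_hidden_layers`, `d_inner = expand × hidden_size`) is an assumption; the authoritative values are in this repo's `config.json`.
```python
from transformers import MambaConfig, MambaForCausalLM

# Assumed mapping from the table above onto MambaConfig field names.
config = MambaConfig(
    hidden_size=2688,        # d_model
    num_hidden_layers=28,    # n_layer
    vocab_size=65024,
    state_size=16,
    conv_kernel=4,
    expand=2,                # d_inner = 2 * 2688 = 5376
    residual_in_fp32=True,
    tie_word_embeddings=False,
)
model = MambaForCausalLM(config)  # randomly initialised; use from_pretrained("lightonai/mambaoutai") for the trained weights
```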
|
Sohaibsoussi/sementic-classification-of-movie-reviews
|
Sohaibsoussi
| 2024-04-25T14:23:43Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-04-25T13:59:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nm-testing/llama2.c-stories110M-pruned50-compressed-tensors
|
nm-testing
| 2024-04-25T14:21:21Z | 134 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"nm-vllm",
"sparse",
"arxiv:2301.00774",
"base_model:Xenova/llama2.c-stories110M",
"base_model:finetune:Xenova/llama2.c-stories110M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T14:17:32Z |
---
base_model: Xenova/llama2.c-stories110M
inference: true
model_type: llama
quantized_by: mgoin
tags:
- nm-vllm
- sparse
---
## llama2.c-stories110M-pruned50
This repo contains model files for [llama2.c 110M tinystories](https://huggingface.co/Xenova/llama2.c-stories110M) optimized for [NM-vLLM](https://github.com/neuralmagic/nm-vllm), a high-throughput serving engine for compressed LLMs.
This model was pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
The weights for this model were saved using the [compressed-tensors](https://github.com/neuralmagic/compressed-tensors/pull/30) library. The chosen compression format is bitmask compression.
## Inference
Install [NM-vLLM](https://github.com/neuralmagic/nm-vllm) for fast inference and low memory usage:
```bash
pip install nm-vllm[sparse]
```
Run in a Python pipeline for local inference:
```python
from vllm import LLM, SamplingParams
model = LLM("nm-testing/llama2.c-stories110M-pruned50", sparsity="sparse_w16a16")
prompt = "Hello my name is"
sampling_params = SamplingParams(max_tokens=100, temperature=0)
outputs = model.generate(prompt, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```
## Prompt template
N/A
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
Install [SparseML](https://github.com/neuralmagic/sparseml):
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
```
Replace the recipe as you like and run this one-shot compression script to apply SparseGPT:
```python
import sparseml.transformers
original_model_name = "Xenova/llama2.c-stories110M"
calibration_dataset = "open_platypus"
output_directory = "output/"
recipe = """
test_stage:
obcq_modifiers:
SparseGPTModifier:
sparsity: 0.5
sequential_update: true
targets: ['re:model.layers.\d*$']
"""
# Apply SparseGPT to the model
sparseml.transformers.oneshot(
model=original_model_name,
dataset=calibration_dataset,
recipe=recipe,
output_dir=output_directory,
)
```
## Slack
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
|
DBangshu/Gemma-2b
|
DBangshu
| 2024-04-25T14:20:05Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T14:16:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ketan3101/llama-3_8b_lora_model
|
Ketan3101
| 2024-04-25T14:19:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T14:18:58Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Ketan3101
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ihork/ds-math-7rl-ft3
|
ihork
| 2024-04-25T14:18:54Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:deepseek-ai/deepseek-math-7b-rl",
"base_model:finetune:deepseek-ai/deepseek-math-7b-rl",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T14:16:27Z |
---
license: other
base_model: deepseek-ai/deepseek-math-7b-rl
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: ds-math-7rl-ft3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ds-math-7rl-ft3
This model is a fine-tuned version of [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 3000
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
happylayers/sc21
|
happylayers
| 2024-04-25T14:18:41Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T14:17:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
crystalkalem/Novakid_Pony-XL
|
crystalkalem
| 2024-04-25T14:18:25Z | 5 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"license:afl-3.0",
"region:us"
] |
text-to-image
| 2024-04-25T14:14:32Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0N\0o\0v\0a\0k\0i\0d\0,\0 \01\0g\0i\0r\0l\0,\0 \0s\0o\0l\0o\0,\0 \0o\0m\0e\0g\0a\0 \0s\0y\0m\0b\0o\0l\0 \0o\0n\0 \0f\0a\0c\0e\0,\0 \0f\0a\0c\0e\0l\0e\0s\0s\0,\0 \0n\0o\0 \0e\0y\0e\0s\0,\0 \0n\0o\0 \0m\0o\0u\0t\0h\0,\0 \0n\0o\0 \0n\0o\0s\0e\0,\0 \0n\0o\0 \0f\0a\0c\0i\0a\0l\0 \0f\0e\0a\0t\0u\0r\0e\0s\0,\0 \0w\0h\0i\0t\0e\0 \0f\0i\0e\0r\0y\0 \0h\0a\0i\0r\0,\0 \0l\0o\0n\0g\0 \0f\0i\0e\0r\0y\0 \0h\0a\0i\0r\0,\0 \0b\0o\0d\0y\0 \0m\0a\0d\0e\0 \0o\0f\0 \0p\0l\0a\0s\0m\0a\0,\0 \0w\0h\0i\0t\0e\0 \0p\0l\0a\0s\0m\0a\0,\0 \0w\0h\0i\0t\0e\0 \0s\0k\0i\0n\0,\0 \0c\0o\0w\0b\0o\0y\0 \0h\0a\0t\0,\0 \0b\0l\0a\0c\0k\0 \0v\0e\0s\0t\0,\0 \0w\0h\0i\0t\0e\0 \0s\0h\0i\0r\0t\0,\0 \0j\0e\0a\0n\0s\0,\0 \0b\0r\0o\0w\0n\0 \0j\0a\0c\0k\0e\0t\0,\0 \0s\0t\0a\0n\0d\0i\0n\0g\0,\0 \0p\0o\0n\0y\0t\0a\0i\0l\0,\0 \0f\0a\0c\0i\0n\0g\0 \0v\0i\0e\0w\0e\0r\0,\0 \0f\0u\0l\0l\0 \0b\0o\0d\0y\0,\0 \0c\0o\0l\0l\0a\0r\0b\0o\0n\0e\0,\0 \0r\0e\0d\0 \0n\0e\0c\0k\0e\0r\0c\0h\0i\0e\0f\0,\0 \0b\0l\0a\0c\0k\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0s\0i\0m\0p\0l\0e\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0,\0"
output:
url: >-
images/DED619DB92F6A41F6EF6D105EB8C210DA8F7096618B8FBB05B0A46BB662C238C.jpeg
base_model: stablediffusionapi/pony-diffusion-v6-xl
instance_prompt: >-
Novakid, faceless, no eyes, no mouth, no nose, no facial features, long fiery
hair, body made of plasma
license: afl-3.0
---
# Novakid_Pony-XL
<Gallery />
## Model description
**Please post your creations! I love seeing the fruits of my hard work enjoyed!**
words used while training...
Novakid,
1boy, 1girl,
solo,
faceless, no eyes, no mouth, no nose, no facial features, long fiery hair, body made of plasma, cowboy shot, cowboy hat, cowboy boots, cowboy western, jeans, black leather jacket, brown coat, shirt under vest,
**Face symbol prompts per line.**
heart symbol on face,
x-cross symbol on face,
circle symbol on face,
star symbol on face,
4-point-compass symbol on face,
omega symbol on face,
Open-Centre-Cross
6-pointed-star symbol on face,
triangle symbol on face,
**Body color prompts are per line.**
blue fiery hair, blue plasma, blue skin,
green fiery hair, green plasma, green skin,
red fiery hair, red plasma, red skin,
white fiery hair, white plasma, white skin,
yellow fiery hair, yellow plasma, yellow skin,
## Trigger words
You should use `Novakid` to trigger the image generation.
You should use `faceless` to trigger the image generation.
You should use `no eyes` to trigger the image generation.
You should use `no mouth` to trigger the image generation.
You should use `no nose` to trigger the image generation.
You should use `no facial features` to trigger the image generation.
You should use `long fiery hair` to trigger the image generation.
You should use `body made of plasma` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/crystalkalem/Novakid_Pony-XL/tree/main) them in the Files & versions tab.
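Since the metadata lists [stablediffusionapi/pony-diffusion-v6-xl](https://huggingface.co/stablediffusionapi/pony-diffusion-v6-xl) as the base model, a hedged sketch of loading this LoRA with `diffusers` could look like the following; the weight filename and generation settings are assumptions, so check the Files & versions tab.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# A sketch, not an official recipe: the base model is taken from this card's metadata.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stablediffusionapi/pony-diffusion-v6-xl", torch_dtype=torch.float16
).to("cuda")
# If the repo holds several .safetensors files, pass weight_name=... explicitly.
pipe.load_lora_weights("crystalkalem/Novakid_Pony-XL")

prompt = "Novakid, 1girl, solo, faceless, no eyes, no mouth, no nose, no facial features, long fiery hair, body made of plasma, white plasma, cowboy hat"
image = pipe(prompt).images[0]
image.save("novakid.png")
```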
|
ShenaoZ/0.001_ablation_5iters_bs256_useresponse_iter_4
|
ShenaoZ
| 2024-04-25T14:16:48Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_ablation_5iters_bs256_useresponse_iter_3",
"base_model:finetune:ShenaoZ/0.001_ablation_5iters_bs256_useresponse_iter_3",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T13:27:37Z |
---
license: mit
base_model: ShenaoZ/0.001_ablation_5iters_bs256_useresponse_iter_3
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_ablation_5iters_bs256_useresponse_iter_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_5iters_bs256_useresponse_iter_4
This model is a fine-tuned version of [ShenaoZ/0.001_ablation_5iters_bs256_useresponse_iter_3](https://huggingface.co/ShenaoZ/0.001_ablation_5iters_bs256_useresponse_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
SanaFalakJ/my-awesome-model
|
SanaFalakJ
| 2024-04-25T14:15:43Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T13:55:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
raulgadea/ppo-LunarLander-v2
|
raulgadea
| 2024-04-25T14:14:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-25T14:14:34Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.93 +/- 13.45
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading example (the checkpoint filename is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename is an assumption -- check this repo's Files tab for the exact name.
checkpoint = load_from_hub(repo_id="raulgadea/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
TinyPixel/try-1
|
TinyPixel
| 2024-04-25T14:14:26Z | 134 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T14:13:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MY11111111/ppo-Pyramids
|
MY11111111
| 2024-04-25T14:07:41Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-04-25T14:04:41Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: MY11111111/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HenryCai1129/adapter-toxic2nontoxic-100-50-0.0006
|
HenryCai1129
| 2024-04-25T14:07:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T14:07:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ninagroot/Baby-Llama-58M-RUN3_3
|
ninagroot
| 2024-04-25T14:05:18Z | 134 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T14:05:06Z |
---
tags:
- generated_from_trainer
model-index:
- name: Baby-Llama-58M-RUN3_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baby-Llama-58M-RUN3_3
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 120
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 297.4542 | 1.0 | 12 | 250.9910 |
| 229.6338 | 2.0 | 24 | 208.3821 |
| 208.295 | 3.0 | 36 | 179.5238 |
| 129.018 | 4.0 | 48 | 112.9940 |
| 82.9929 | 5.0 | 60 | 74.3020 |
| 46.9522 | 6.0 | 72 | 42.2297 |
| 24.9202 | 7.0 | 84 | 23.4095 |
| 15.2942 | 8.0 | 96 | 13.3510 |
| 10.0619 | 9.0 | 108 | 9.7284 |
| 7.784 | 10.0 | 120 | 7.8737 |
| 6.4759 | 11.0 | 132 | 7.2488 |
| 6.1744 | 12.0 | 144 | 6.3695 |
| 5.4904 | 13.0 | 156 | 6.2293 |
| 5.4665 | 14.0 | 168 | 5.8846 |
| 4.731 | 15.0 | 180 | 5.8094 |
| 4.7619 | 16.0 | 192 | 5.4680 |
| 4.6858 | 17.0 | 204 | 5.4562 |
| 4.594 | 18.0 | 216 | 5.2367 |
| 4.7173 | 19.0 | 228 | 5.1584 |
| 4.2267 | 20.0 | 240 | 5.1182 |
| 4.2401 | 21.0 | 252 | 5.0173 |
| 4.767 | 22.0 | 264 | 4.9806 |
| 4.0932 | 23.0 | 276 | 4.8975 |
| 4.3266 | 24.0 | 288 | 4.8852 |
| 4.0103 | 25.0 | 300 | 4.7698 |
| 4.1829 | 26.0 | 312 | 4.7993 |
| 4.0862 | 27.0 | 324 | 4.7921 |
| 4.1418 | 28.0 | 336 | 4.7469 |
| 4.0668 | 29.0 | 348 | 4.7108 |
| 4.0318 | 30.0 | 360 | 4.6335 |
| 4.0468 | 31.0 | 372 | 4.6761 |
| 3.9454 | 32.0 | 384 | 4.5814 |
| 3.943 | 33.0 | 396 | 4.5624 |
| 3.5406 | 34.0 | 408 | 4.6243 |
| 3.5091 | 35.0 | 420 | 4.5822 |
| 3.5972 | 36.0 | 432 | 4.4551 |
| 3.711 | 37.0 | 444 | 4.4898 |
| 3.7391 | 38.0 | 456 | 4.4472 |
| 3.7883 | 39.0 | 468 | 4.4188 |
| 3.7508 | 40.0 | 480 | 4.3803 |
| 3.422 | 41.0 | 492 | 4.3539 |
| 3.5801 | 42.0 | 504 | 4.3718 |
| 3.3411 | 43.0 | 516 | 4.3635 |
| 3.5347 | 44.0 | 528 | 4.3381 |
| 3.3136 | 45.0 | 540 | 4.2857 |
| 3.6378 | 46.0 | 552 | 4.2428 |
| 3.9194 | 47.0 | 564 | 4.3143 |
| 3.444 | 48.0 | 576 | 4.2403 |
| 3.5414 | 49.0 | 588 | 4.2614 |
| 3.6703 | 50.0 | 600 | 4.2729 |
| 3.5997 | 51.0 | 612 | 4.2104 |
| 3.1202 | 52.0 | 624 | 4.1948 |
| 3.3409 | 53.0 | 636 | 4.2018 |
| 3.4611 | 54.0 | 648 | 4.1726 |
| 3.1643 | 55.0 | 660 | 4.1776 |
| 3.1082 | 56.0 | 672 | 4.1785 |
| 2.9745 | 57.0 | 684 | 4.1374 |
| 3.3937 | 58.0 | 696 | 4.1434 |
| 3.265 | 59.0 | 708 | 4.1356 |
| 3.0267 | 60.0 | 720 | 4.1474 |
| 3.0632 | 61.0 | 732 | 4.1193 |
| 3.3543 | 62.0 | 744 | 4.0760 |
| 3.519 | 63.0 | 756 | 4.1373 |
| 3.2546 | 64.0 | 768 | 4.0591 |
| 3.0835 | 65.0 | 780 | 4.0572 |
| 3.3228 | 66.0 | 792 | 4.0788 |
| 3.3441 | 67.0 | 804 | 4.0489 |
| 2.9186 | 68.0 | 816 | 4.0360 |
| 3.1519 | 69.0 | 828 | 4.0376 |
| 3.5119 | 70.0 | 840 | 4.0159 |
| 3.1155 | 71.0 | 852 | 4.0070 |
| 3.1899 | 72.0 | 864 | 3.9895 |
| 3.0979 | 73.0 | 876 | 3.9936 |
| 3.1709 | 74.0 | 888 | 3.9997 |
| 3.3529 | 75.0 | 900 | 3.9848 |
| 2.7989 | 76.0 | 912 | 3.9760 |
| 3.1918 | 77.0 | 924 | 3.9693 |
| 2.8472 | 78.0 | 936 | 3.9504 |
| 3.3493 | 79.0 | 948 | 3.9520 |
| 3.5098 | 80.0 | 960 | 3.9401 |
| 3.2381 | 81.0 | 972 | 3.9363 |
| 3.1959 | 82.0 | 984 | 3.9292 |
| 3.4514 | 83.0 | 996 | 3.9128 |
| 2.9119 | 84.0 | 1008 | 3.9194 |
| 3.2452 | 85.0 | 1020 | 3.9038 |
| 3.0657 | 86.0 | 1032 | 3.9168 |
| 2.8583 | 87.0 | 1044 | 3.9018 |
| 3.2229 | 88.0 | 1056 | 3.9000 |
| 2.9973 | 89.0 | 1068 | 3.8906 |
| 3.0533 | 90.0 | 1080 | 3.8818 |
| 3.3813 | 91.0 | 1092 | 3.8715 |
| 3.1559 | 92.0 | 1104 | 3.8639 |
| 3.1343 | 93.0 | 1116 | 3.8674 |
| 2.9604 | 94.0 | 1128 | 3.8690 |
| 3.3522 | 95.0 | 1140 | 3.8646 |
| 2.9739 | 96.0 | 1152 | 3.8589 |
| 2.7854 | 97.0 | 1164 | 3.8559 |
| 2.8544 | 98.0 | 1176 | 3.8445 |
| 2.9875 | 99.0 | 1188 | 3.8434 |
| 3.3395 | 100.0 | 1200 | 3.8402 |
| 2.736 | 101.0 | 1212 | 3.8398 |
| 3.0598 | 102.0 | 1224 | 3.8384 |
| 3.003 | 103.0 | 1236 | 3.8376 |
| 3.0566 | 104.0 | 1248 | 3.8386 |
| 3.1727 | 105.0 | 1260 | 3.8281 |
| 2.9811 | 106.0 | 1272 | 3.8331 |
| 2.7108 | 107.0 | 1284 | 3.8224 |
| 2.6579 | 108.0 | 1296 | 3.8236 |
| 3.1319 | 109.0 | 1308 | 3.8197 |
| 3.1115 | 110.0 | 1320 | 3.8216 |
| 3.0955 | 111.0 | 1332 | 3.8181 |
| 2.6928 | 112.0 | 1344 | 3.8188 |
| 2.9943 | 113.0 | 1356 | 3.8147 |
| 3.0923 | 114.0 | 1368 | 3.8154 |
| 3.1913 | 115.0 | 1380 | 3.8156 |
| 2.9444 | 116.0 | 1392 | 3.8146 |
| 3.0491 | 117.0 | 1404 | 3.8141 |
| 2.7357 | 118.0 | 1416 | 3.8148 |
| 3.0744 | 119.0 | 1428 | 3.8148 |
| 3.1122 | 120.0 | 1440 | 3.8148 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
himum/sn6_2s
|
himum
| 2024-04-25T13:58:16Z | 247 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T15:18:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vonjack/Qwen-LLaMAfied-HFTok-7B-Chat
|
vonjack
| 2024-04-25T13:55:56Z | 1,509 | 24 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"qwen",
"llama-2",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-09T08:17:56Z |
---
language:
- en
- zh
tags:
- qwen
- llama
- llama-2
license: apache-2.0
---
[WIP]
Original repository: [JosephusCheung/Qwen-LLaMAfied-7B-Chat](https://huggingface.co/JosephusCheung/Qwen-LLaMAfied-7B-Chat).
This is the LLaMAfied version of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
You can use LlamaForCausalLM for model inference, which is the same as LLaMA/LLaMA-2 models.
I converted the tokenizer from the tiktoken format to the Hugging Face format, so you no longer need to enable remote/external code execution when loading.
The model has been edited to be white-labelled, meaning the model will no longer call itself a Qwen.
SPOILER: Further finetuning is in progress; the current version is a work in progress, and some knowledge may be biased or illusory due to the structural changes. Will be updated very, very soon.
PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
CURRENT MMLU: 50.36
Issue: Compared to the original Qwen-Chat scoring 53.9, the MMLU score dropped slightly (-3.54) due to insufficient realignment.
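As noted above, inference works like a standard LLaMA model; a minimal sketch with `transformers` (generation settings are illustrative, and `accelerate` is assumed for `device_map="auto"`):
```python
from transformers import AutoTokenizer, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("vonjack/Qwen-LLaMAfied-HFTok-7B-Chat")
model = LlamaForCausalLM.from_pretrained(
    "vonjack/Qwen-LLaMAfied-HFTok-7B-Chat", device_map="auto"
)

# ChatML-style prompt, matching the PROMPT FORMAT above
prompt = "<|im_start|>user\nHello, who are you?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```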
|
Saifuddin1978/_11_
|
Saifuddin1978
| 2024-04-25T13:53:57Z | 0 | 0 |
fasttext
|
[
"fasttext",
"art",
"text-generation",
"ar",
"dataset:HuggingFaceFW/fineweb",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-04-25T13:47:46Z |
---
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb
language:
- ar
metrics:
- accuracy
library_name: fasttext
pipeline_tag: text-generation
tags:
- art
---
|
Holarissun/dpo_helpfulhelpful_human_gamma5.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
|
Holarissun
| 2024-04-25T13:52:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T13:52:47Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: dpo_helpfulhelpful_human_gamma5.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_helpfulhelpful_human_gamma5.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
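This repository contains a PEFT adapter rather than full model weights; a minimal loading sketch, assuming the adapter is applied on top of the listed base model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Attach the DPO-trained adapter from this repository
model = PeftModel.from_pretrained(
    base,
    "Holarissun/dpo_helpfulhelpful_human_gamma5.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06",
)
model.eval()
```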
|
i-pj/MLAgent-Pyramid
|
i-pj
| 2024-04-25T13:46:46Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-04-25T13:46:43Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: i-pj/MLAgent-Pyramid
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
STomoya/caformer_s18.st_safebooru_1k
|
STomoya
| 2024-04-25T13:46:40Z | 15 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2024-04-25T13:46:27Z |
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
---
# Model card for caformer_s18.st_safebooru_1k
## Model Details
- **metrics:**
|Precision|Recall|F1-score|
|-|-|-|
|0.7941601067736772|0.5087503998700491|0.5981664346700365|
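A minimal loading sketch with `timm` (the `hf_hub:` prefix is how timm resolves Hub-hosted checkpoints; preprocessing is resolved from the model's own config):
```python
import timm

# Load the fine-tuned classifier directly from the Hugging Face Hub
model = timm.create_model("hf_hub:STomoya/caformer_s18.st_safebooru_1k", pretrained=True)
model.eval()

# Build the matching preprocessing transform from the model's data config
config = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**config)
```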
|
Holarissun/dpo_helpfulhelpful_human_gamma100.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
|
Holarissun
| 2024-04-25T13:40:45Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T13:40:38Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: dpo_helpfulhelpful_human_gamma100.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_helpfulhelpful_human_gamma100.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
AlignmentResearch/robust_llm_pythia-410m_mz-130_IMDB_n-its-10-seed-2
|
AlignmentResearch
| 2024-04-25T13:39:25Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m",
"base_model:finetune:EleutherAI/pythia-410m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-25T13:38:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-410m
model-index:
- name: robust_llm_pythia-410m_mz-130_IMDB_n-its-10-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-410m_mz-130_IMDB_n-its-10-seed-2
This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
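A minimal classification sketch with the `transformers` pipeline (the label names returned depend on this checkpoint's config and are not documented here):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-410m_mz-130_IMDB_n-its-10-seed-2",
)
print(clf("This movie was surprisingly good."))
```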
|
Dua020/whisper-large-v3
|
Dua020
| 2024-04-25T13:39:23Z | 76 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-25T11:58:38Z |
---
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: None
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 34.53822060441886
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2859
- Wer: 34.5382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0818 | 2.4450 | 1000 | 0.2859 | 34.5382 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
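A minimal transcription sketch with the `transformers` ASR pipeline, assuming the processor files were pushed alongside the weights (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Dua020/whisper-large-v3")

# Transcribe a local Hindi audio file (placeholder path)
result = asr("sample_hindi.wav")
print(result["text"])
```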
|
AlignmentResearch/robust_llm_pythia-410m_mz-130_IMDB_n-its-10-seed-0
|
AlignmentResearch
| 2024-04-25T13:35:41Z | 104 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m",
"base_model:finetune:EleutherAI/pythia-410m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-25T13:35:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-410m
model-index:
- name: robust_llm_pythia-410m_mz-130_IMDB_n-its-10-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-410m_mz-130_IMDB_n-its-10-seed-0
This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
tutuhu/shanshui2
|
tutuhu
| 2024-04-25T13:33:16Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T11:32:06Z |
---
license: other
license_name: open
license_link: LICENSE
---
|
dtorber/roberta-base
|
dtorber
| 2024-04-25T13:33:04Z | 55 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-22T14:03:40Z |
---
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3745
- Icm: -0.0196
- Icmnorm: 0.4901
- Fmeasure: 0.6565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Icm | Icmnorm | Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:--------:|
| 0.6233 | 1.0 | 771 | 0.6371 | -0.0341 | 0.4827 | 0.6416 |
| 0.4026 | 2.0 | 1542 | 0.8523 | -0.1320 | 0.4330 | 0.5968 |
| 0.2684 | 3.0 | 2313 | 1.3745 | -0.0196 | 0.4901 | 0.6565 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
fibleep/pegasus-samsum
|
fibleep
| 2024-04-25T13:29:31Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-25T13:19:56Z |
---
tags:
- generated_from_trainer
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
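A minimal summarization sketch with the `transformers` pipeline, assuming tokenizer files are included in the repo (the dialogue is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="fibleep/pegasus-samsum")

dialogue = (
    "Anna: Are we still meeting at 6?\n"
    "Tom: Yes, see you at the cafe.\n"
    "Anna: Great, I'll bring the slides."
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```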
|
hus960/PsychoOrca_32x1.1B_MoE_bf16-Q4_K_M-GGUF
|
hus960
| 2024-04-25T13:29:21Z | 14 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:SumayyaAli/accu_qa_dataset",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T13:28:41Z |
---
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
datasets:
- Open-Orca/OpenOrca
- SumayyaAli/accu_qa_dataset
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
pipeline_tag: text-generation
---
# hus960/PsychoOrca_32x1.1B_MoE_bf16-Q4_K_M-GGUF
This model was converted to GGUF format from [`Kquant03/PsychoOrca_32x1.1B_MoE_bf16`](https://huggingface.co/Kquant03/PsychoOrca_32x1.1B_MoE_bf16) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Kquant03/PsychoOrca_32x1.1B_MoE_bf16) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo hus960/PsychoOrca_32x1.1B_MoE_bf16-Q4_K_M-GGUF --model psychoorca_32x1.1b_moe_bf16.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo hus960/PsychoOrca_32x1.1B_MoE_bf16-Q4_K_M-GGUF --model psychoorca_32x1.1b_moe_bf16.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m psychoorca_32x1.1b_moe_bf16.Q4_K_M.gguf -n 128
```
|
tutuhu/shanshui1
|
tutuhu
| 2024-04-25T13:27:48Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T10:39:40Z |
---
license: other
license_name: cnn
license_link: LICENSE
---
|
Lugaborg/Juclyote
|
Lugaborg
| 2024-04-25T13:27:03Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-09T04:00:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Holarissun/dpo_helpfulhelpful_human_gamma0.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
|
Holarissun
| 2024-04-25T13:26:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T13:26:25Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: dpo_helpfulhelpful_human_gamma0.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_helpfulhelpful_human_gamma0.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Stanlito/llam3_lora_model
|
Stanlito
| 2024-04-25T13:24:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T13:24:15Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Stanlito
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
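A minimal loading sketch with Unsloth, assuming this repo holds a LoRA adapter compatible with the listed base model (`max_seq_length` is an illustrative choice):
```python
from unsloth import FastLanguageModel

# Unsloth resolves LoRA adapters against the base model automatically
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Stanlito/llam3_lora_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode
```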
|
YYYYYYibo/zephyr-7b-dpo-qlora
|
YYYYYYibo
| 2024-04-25T13:19:51Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:updated",
"dataset:original",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-04-19T09:14:12Z |
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
base_model: mistralai/Mistral-7B-v0.1
datasets:
- updated
- original
model-index:
- name: zephyr-7b-dpo-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-qlora
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-qlora](https://huggingface.co/alignment-handbook/zephyr-7b-sft-qlora) on the updated and the original datasets.
It achieves the following results on the evaluation set:
- Loss: 0.5735
- Rewards/chosen: -0.6770
- Rewards/rejected: -1.1070
- Rewards/accuracies: 0.6940
- Rewards/margins: 0.4300
- Logps/rejected: -351.8942
- Logps/chosen: -331.1508
- Logits/rejected: -1.4599
- Logits/chosen: -1.7015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6269 | 0.32 | 100 | 0.6269 | -0.2377 | -0.4431 | 0.6820 | 0.2054 | -285.4985 | -287.2169 | -2.2566 | -2.3666 |
| 0.6332 | 0.64 | 200 | 0.5821 | -0.5909 | -0.9588 | 0.7060 | 0.3679 | -337.0687 | -322.5442 | -1.6871 | -1.8938 |
| 0.5648 | 0.96 | 300 | 0.5735 | -0.6770 | -1.1070 | 0.6940 | 0.4300 | -351.8942 | -331.1508 | -1.4599 | -1.7015 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.2.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
Holarissun/dpo_helpfulhelpful_human_gamma1.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
|
Holarissun
| 2024-04-25T13:18:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T13:18:49Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: dpo_helpfulhelpful_human_gamma1.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_helpfulhelpful_human_gamma1.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
vangard703/DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi
|
vangard703
| 2024-04-25T13:18:14Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T03:39:07Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6444
- Rewards/chosen: -2.4793
- Rewards/rejected: -2.9560
- Rewards/accuracies: 0.6667
- Rewards/margins: 0.4767
- Rewards/mix Margin: 0.1749
- Logps/rejected: -481.8095
- Logps/chosen: -453.2426
- Logits/rejected: -1.7012
- Logits/chosen: -1.7287
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.17.1
- Tokenizers 0.15.1
|
MSLars/de_extr_summ
|
MSLars
| 2024-04-25T13:17:46Z | 0 | 0 |
spacy
|
[
"spacy",
"token-classification",
"de",
"model-index",
"region:us"
] |
token-classification
| 2024-04-25T13:17:26Z |
---
tags:
- spacy
- token-classification
language:
- de
model-index:
- name: de_extr_summ
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9932432432
- name: NER Recall
type: recall
value: 0.9639344262
- name: NER F Score
type: f_score
value: 0.9783693844
---
| Feature | Description |
| --- | --- |
| **Name** | `de_extr_summ` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.7.2,<3.8.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (2 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `Agreement`, `Klöser` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 97.84 |
| `ENTS_P` | 99.32 |
| `ENTS_R` | 96.39 |
| `TRANSFORMER_LOSS` | 8692.25 |
| `NER_LOSS` | 485216.62 |
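A minimal usage sketch, assuming the pipeline was pushed with `spacy-huggingface-hub` and is installed as the packaged wheel (the wheel filename follows the usual Hub layout and is an assumption):
```python
# Install the packaged pipeline first (assumed wheel name):
#   pip install https://huggingface.co/MSLars/de_extr_summ/resolve/main/de_extr_summ-any-py3-none-any.whl
import spacy

nlp = spacy.load("de_extr_summ")
doc = nlp("Der Vertrag wurde von Klöser unterzeichnet.")
print([(ent.text, ent.label_) for ent in doc.ents])
```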
|
cgihlstorf/NEW_finetuned_llama27b32_1_0.0003_alternate_RANDOM_100_pct
|
cgihlstorf
| 2024-04-25T13:15:33Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-04-25T13:14:26Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
LeeZande/Egg1
|
LeeZande
| 2024-04-25T13:15:32Z | 0 | 0 | null |
[
"zh",
"en",
"arxiv:1910.09700",
"license:llama3",
"region:us"
] | null | 2024-04-23T22:19:10Z |
---
license: llama3
language:
- zh
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ostixe360/lp-music-caps
|
Ostixe360
| 2024-04-25T13:14:31Z | 54 | 2 |
transformers
|
[
"transformers",
"safetensors",
"music",
"music-captioning",
"en",
"dataset:seungheondoh/LP-MusicCaps-MSD",
"dataset:seungheondoh/LP-MusicCaps-MC",
"arxiv:2307.16372",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T13:10:39Z |
---
license: mit
datasets:
- seungheondoh/LP-MusicCaps-MSD
- seungheondoh/LP-MusicCaps-MC
language:
- en
metrics:
- bleu
- bertscore
tags:
- music
- music-captioning
---
# LP-MusicCaps-HF
This is the LP-MusicCaps model, but loadable directly with the Hugging Face library.
# Original Model Card
- **Repository:** [LP-MusicCaps repository](https://github.com/seungheondoh/lp-music-caps)
- **Paper:** [ArXiv](https://arxiv.org/abs/2307.16372)
# :sound: LP-MusicCaps: LLM-Based Pseudo Music Captioning
[Demo video](https://youtu.be/ezwYVaiC-AM)
This is an implementation of [LP-MusicCaps: LLM-Based Pseudo Music Captioning](#). This project aims to generate captions for music. 1) Tag-to-Caption: using existing tags, we leverage the power of OpenAI's GPT-3.5 Turbo API to generate high-quality and contextually relevant captions based on music tags. 2) Audio-to-Caption: using music-audio and pseudo-caption pairs, we train a cross-modal encoder-decoder model for end-to-end music captioning.
> [**LP-MusicCaps: LLM-Based Pseudo Music Captioning**](#)
> SeungHeon Doh, Keunwoo Choi, Jongpil Lee, Juhan Nam
> To appear ISMIR 2023
## TL;DR
<p align = "center">
<img src = "https://i.imgur.com/2LC0nT1.png">
</p>
- **[1.Tag-to-Caption: LLM Captioning](https://github.com/seungheondoh/lp-music-caps/tree/main/lpmc/llm_captioning)**: Generate caption from given tag input.
- **[2.Pretrain Music Captioning Model](https://github.com/seungheondoh/lp-music-caps/tree/main/lpmc/music_captioning)**: Generate pseudo caption from given audio.
- **[3.Transfer Music Captioning Model](https://github.com/seungheondoh/lp-music-caps/tree/main/lpmc/music_captioning/transfer.py)**: Generate human level caption from given audio.
## Open Source Material
- [pre-trained models](https://huggingface.co/seungheondoh/lp-music-caps)
- [music-pseudo caption dataset](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MSD)
- [demo](https://huggingface.co/spaces/seungheondoh/LP-Music-Caps-demo)
are available online for future research. An example of the dataset is shown in this [notebook](https://github.com/seungheondoh/lp-music-caps/blob/main/notebook/Dataset.ipynb).
|
Kelechie/Bevo-Budv1.1
|
Kelechie
| 2024-04-25T13:12:17Z | 4 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:distilbert/distilgpt2",
"base_model:adapter:distilbert/distilgpt2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T06:59:06Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: distilbert/distilgpt2
datasets:
- generator
model-index:
- name: Bevo-Budv1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bevo-Budv1.1
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.7.0
- Transformers 4.40.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
rizwan-ai/mistral_7b-instruct-guanaco
|
rizwan-ai
| 2024-04-25T13:11:37Z | 0 | 2 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T22:32:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gunghio/xlm-roberta-base-finetuned-panx-ner
|
gunghio
| 2024-04-25T13:07:41Z | 107 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"it",
"en",
"de",
"fr",
"es",
"multilingual",
"dataset:xtreme",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-29T11:15:55Z |
---
language:
- it
- en
- de
- fr
- es
- multilingual
license:
- mit
datasets:
- xtreme
metrics:
- precision: 0.874
- recall: 0.88
- f1: 0.877
- accuracy: 0.943
inference:
parameters:
aggregation_strategy: first
---
# gunghio/xlm-roberta-base-finetuned-panx-ner
This model was trained starting from xlm-roberta-base on a subset of the xtreme dataset.
The `xtreme` subsets used are PAN-X.{lang}. The languages used for training/validation are Italian, English, German, French, and Spanish.
Only 75% of the whole dataset was used.
## Intended uses & limitations
The fine-tuned model can be used for Named Entity Recognition in it, en, de, fr, and es.
## Training and evaluation data
Training dataset: [xtreme](https://huggingface.co/datasets/xtreme)
### Training results
It achieves the following results on the evaluation set:
- Precision: 0.8744154472771157
- Recall: 0.8791424269015351
- F1: 0.8767725659462058
- Accuracy: 0.9432040948504613
Details:
| Label | Precision | Recall | F1-Score | Support |
|---------|-----------|--------|----------|---------|
| PER | 0.922 | 0.908 | 0.915 | 26639 |
| LOC | 0.880 | 0.906 | 0.892 | 37623 |
| ORG | 0.821 | 0.816 | 0.818 | 28045 |
| Overall | 0.874 | 0.879 | 0.877 | 92307 |
## Usage
Set the aggregation strategy according to the [documentation](https://huggingface.co/docs/transformers/v4.18.0/en/main_classes/pipelines#transformers.TokenClassificationPipeline).
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("gunghio/xlm-roberta-base-finetuned-panx-ner")
model = AutoModelForTokenClassification.from_pretrained("gunghio/xlm-roberta-base-finetuned-panx-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="first")
example = "My name is Wolfgang and I live in Berlin"
ner_results = nlp(example)
print(ner_results)
```
|
piegarroni/phi-2-csv-conversion-cense-v6
|
piegarroni
| 2024-04-25T13:06:43Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-04-25T13:06:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
automerger/Experiment26T3qm7-7B
|
automerger
| 2024-04-25T13:06:15Z | 0 | 0 | null |
[
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-04-19T02:19:44Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
---
# Experiment26T3qm7-7B
Experiment26T3qm7-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: yam-peleg/Experiment26-7B
- model: nlpguy/T3QM7
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment26T3qm7-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Audino/my-awesome-modelv4
|
Audino
| 2024-04-25T13:04:35Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-25T13:03:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LumousInTheWild/image_captioning_1
|
LumousInTheWild
| 2024-04-25T13:03:46Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-04-24T11:23:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ibrahim-haji-abdi/longformer-fake-review-detector
|
ibrahim-haji-abdi
| 2024-04-25T13:03:32Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-17T21:03:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: allenai/longformer-base-4096
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: longformer-fake-review-detector
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-fake-review-detector
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2980
- Accuracy: 0.9252
- F1: 0.9155
- Precision: 0.9774
- Recall: 0.8609
## Model description
More information needed
## Intended uses & limitations
More information needed
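A minimal inference sketch is given below; the checkpoint ID comes from this repository, while the example review and any label interpretation are purely illustrative.

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned Longformer checkpoint as a text classifier.
classifier = pipeline(
    "text-classification",
    model="ibrahim-haji-abdi/longformer-fake-review-detector",
)

# Illustrative review text; the label-to-meaning mapping is not documented in this card.
review = "Absolutely amazing product, best purchase ever, five stars, buy now!!!"
print(classifier(review))
```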
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 40 | 0.2606 | 0.8816 | 0.8643 | 0.9380 | 0.8013 |
| No log | 2.0 | 80 | 0.5782 | 0.8100 | 0.7469 | 1.0 | 0.5960 |
| No log | 3.0 | 120 | 0.2782 | 0.9097 | 0.8968 | 0.9692 | 0.8344 |
| No log | 4.0 | 160 | 0.2980 | 0.9252 | 0.9155 | 0.9774 | 0.8609 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Mihaj/wav2vec2-large-uralic-voxpopuli-v2-karelian-CodeSwitching
|
Mihaj
| 2024-04-25T13:01:17Z | 20 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-22T11:25:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stablediffusionapi/realistic-vision-v6.0-b1-inpaint-n
|
stablediffusionapi
| 2024-04-25T13:00:04Z | 86 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-04-25T12:58:40Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to "realistic-vision-v6.0-b1-inpaint-n".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/realistic-vision-v6.0-b1-inpaint-n)
Model link: [View model](https://modelslab.com/models/realistic-vision-v6.0-b1-inpaint-n)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "realistic-vision-v6.0-b1-inpaint-n",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
HenryCai1129/adapter-toxic2nontoxic-100-50-0.0003
|
HenryCai1129
| 2024-04-25T12:59:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T10:03:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yiyic/llama3-lora-clf-3
|
yiyic
| 2024-04-25T12:58:42Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"region:us"
] | null | 2024-04-25T12:58:39Z |
---
library_name: peft
base_model: openlm-research/open_llama_3b_v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
AdapterHub/llama2-7b-qlora-openassistant
|
AdapterHub
| 2024-04-25T12:56:20Z | 7 | 1 |
adapter-transformers
|
[
"adapter-transformers",
"llama",
"llama-2",
"text-generation",
"dataset:timdettmers/openassistant-guanaco",
"arxiv:2305.14314",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-04-07T19:36:04Z |
---
tags:
- llama
- adapter-transformers
- llama-2
datasets:
- timdettmers/openassistant-guanaco
license: apache-2.0
pipeline_tag: text-generation
---
# OpenAssistant QLoRA Adapter for Llama-2 7B
QLoRA adapter for the Llama-2 7B (`meta-llama/Llama-2-7b-hf`) model trained for instruction tuning on the [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco/) dataset.
**This adapter was created for usage with the [Adapters](https://github.com/Adapter-Hub/adapters) library.**
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the model and adapter can be loaded and activated like this:
```python
import adapters
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "AdapterHub/llama2-7b-qlora-openassistant"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16,
),
torch_dtype=torch.bfloat16,
)
adapters.init(model)
adapter_name = model.load_adapter(adapter_id, source="hf", set_active=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
### Inference
Inference can be done via the standard methods built into the Transformers library.
We add some helper code to properly prompt the model first:
```python
from transformers import StoppingCriteria
# stop if model starts to generate "### Human:"
class EosListStoppingCriteria(StoppingCriteria):
def __init__(self, eos_sequence = [12968, 29901]):
self.eos_sequence = eos_sequence
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
last_ids = input_ids[:,-len(self.eos_sequence):].tolist()
return self.eos_sequence in last_ids
def prompt_model(model, text: str):
batch = tokenizer(f"### Human: {text} ### Assistant:", return_tensors="pt")
batch = batch.to(model.device)
with torch.cuda.amp.autocast():
output_tokens = model.generate(**batch, stopping_criteria=[EosListStoppingCriteria()])
# skip prompt when decoding
decoded = tokenizer.decode(output_tokens[0, batch["input_ids"].shape[1]:], skip_special_tokens=True)
return decoded[:-10] if decoded.endswith("### Human:") else decoded
```
Now, to prompt the model:
```python
prompt_model(model, "Please explain NLP in simple terms.")
```
### Weight merging
To decrease inference latency, the LoRA weights can be merged with the base model:
```python
model.merge_adapter(adapter_name)
```
## Architecture & Training
**Training was run with the code in [this notebook](https://github.com/adapter-hub/adapters/blob/main/notebooks/QLoRA_Llama_Finetuning.ipynb)**.
The LoRA architecture closely follows the configuration described in the [QLoRA paper](https://arxiv.org/pdf/2305.14314.pdf):
- `r=64`, `alpha=16`
- LoRA modules added in output, intermediate and all (Q, K, V) self-attention linear layers
The adapter is trained similarly to the Guanaco models proposed in the paper:
- Dataset: [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)
- Quantization: 4-bit QLoRA
- Batch size: 16, LR: 2e-4, max steps: 1875
- Sequence length: 512
|
eyeonyou/logs
|
eyeonyou
| 2024-04-25T12:53:52Z | 16 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/codebert-base",
"base_model:finetune:microsoft/codebert-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-21T03:16:27Z |
---
base_model: microsoft/codebert-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: logs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# logs
This model is a fine-tuned version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0405
- Accuracy: 0.9950
- Precision: 0.9950
- Recall: 0.9950
- F1 Score: 0.9950
## Model description
More information needed
## Intended uses & limitations
More information needed
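For reference, a minimal inference sketch is shown below; the checkpoint ID comes from this repository, while the example input is illustrative since the expected input format is not documented here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: load the fine-tuned CodeBERT classifier from the Hub.
tokenizer = AutoTokenizer.from_pretrained("eyeonyou/logs")
model = AutoModelForSequenceClassification.from_pretrained("eyeonyou/logs")

# Illustrative input only.
inputs = tokenizer("def add(a, b): return a - b", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```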
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|
| 0.1436 | 1.0 | 907 | 0.0851 | 0.9829 | 0.9829 | 0.9829 | 0.9829 |
| 0.0737 | 2.0 | 1814 | 0.0548 | 0.9915 | 0.9915 | 0.9915 | 0.9915 |
| 0.0216 | 3.0 | 2721 | 0.0469 | 0.9917 | 0.9918 | 0.9917 | 0.9917 |
| 0.0143 | 4.0 | 3628 | 0.0405 | 0.9950 | 0.9950 | 0.9950 | 0.9950 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
racheltong/whisper-small-custom300-1e-5-va2000
|
racheltong
| 2024-04-25T12:52:57Z | 77 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-25T08:49:25Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-custom300-1e-5-va2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-custom300-1e-5-va2000
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0795
- Wer: 1.1530
## Model description
More information needed
## Intended uses & limitations
More information needed
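As a minimal usage sketch (the audio path below is a placeholder), the checkpoint can be loaded with the `automatic-speech-recognition` pipeline:

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned Whisper checkpoint for transcription.
asr = pipeline(
    "automatic-speech-recognition",
    model="racheltong/whisper-small-custom300-1e-5-va2000",
)

# "audio.wav" is a placeholder path to any speech recording.
print(asr("audio.wav")["text"])
```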
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.0043 | 6.9444 | 1000 | 0.0728 | 1.2498 |
| 0.0003 | 13.8889 | 2000 | 0.0795 | 1.1530 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
rwr20/dqn-SpaceInvadersNoFrameskip-v4_rwr20_2
|
rwr20
| 2024-04-25T12:50:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-25T12:49:24Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 786.50 +/- 255.83
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rwr20 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rwr20 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rwr20
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 2000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
i-pj/ppo-SnowballTarget
|
i-pj
| 2024-04-25T12:49:10Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-04-25T12:49:07Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: i-pj/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
frankie699/output1
|
frankie699
| 2024-04-25T12:47:38Z | 72 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v2-xxlarge",
"base_model:finetune:microsoft/deberta-v2-xxlarge",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-24T17:32:22Z |
---
license: mit
base_model: microsoft/deberta-v2-xxlarge
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: output1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output1
This model is a fine-tuned version of [microsoft/deberta-v2-xxlarge](https://huggingface.co/microsoft/deberta-v2-xxlarge) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7690
- Accuracy: 0.676
- Macro F1: 0.6761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|
| 1.5278 | 0.2286 | 100 | 1.1249 | 0.5146 | 0.4600 |
| 0.9452 | 0.4571 | 200 | 0.8437 | 0.645 | 0.6425 |
| 0.8367 | 0.6857 | 300 | 0.8038 | 0.6477 | 0.6531 |
| 0.8092 | 0.9143 | 400 | 0.7801 | 0.6593 | 0.6611 |
| 0.7679 | 1.1429 | 500 | 0.7868 | 0.6717 | 0.6697 |
| 0.7451 | 1.3714 | 600 | 0.7711 | 0.6647 | 0.6645 |
| 0.7467 | 1.6 | 700 | 0.7646 | 0.6659 | 0.6649 |
| 0.7261 | 1.8286 | 800 | 0.7840 | 0.6649 | 0.6632 |
| 0.7305 | 2.0571 | 900 | 0.7755 | 0.6681 | 0.6707 |
| 0.6742 | 2.2857 | 1000 | 0.7719 | 0.6691 | 0.6707 |
| 0.6728 | 2.5143 | 1100 | 0.7640 | 0.6726 | 0.6726 |
| 0.6691 | 2.7429 | 1200 | 0.7759 | 0.6761 | 0.6783 |
| 0.677 | 2.9714 | 1300 | 0.7690 | 0.676 | 0.6761 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2
- Datasets 2.19.0
- Tokenizers 0.19.1
|
selvaa/segformer-b1-finetuned-cityscapes-1024-1024-full-ds
|
selvaa
| 2024-04-25T12:44:10Z | 35 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/segformer-b1-finetuned-cityscapes-1024-1024",
"base_model:finetune:nvidia/segformer-b1-finetuned-cityscapes-1024-1024",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T10:09:07Z |
---
license: other
base_model: nvidia/segformer-b1-finetuned-cityscapes-1024-1024
tags:
- generated_from_trainer
model-index:
- name: segformer-b1-finetuned-cityscapes-1024-1024-full-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b1-finetuned-cityscapes-1024-1024-full-ds
This model is a fine-tuned version of [nvidia/segformer-b1-finetuned-cityscapes-1024-1024](https://huggingface.co/nvidia/segformer-b1-finetuned-cityscapes-1024-1024) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0506
- Mean Iou: 0.9137
- Mean Accuracy: 0.9561
- Overall Accuracy: 0.9831
- Accuracy Default: 1e-06
- Accuracy Pipe: 0.9020
- Accuracy Floor: 0.9742
- Accuracy Background: 0.9920
- Iou Default: 1e-06
- Iou Pipe: 0.7996
- Iou Floor: 0.9590
- Iou Background: 0.9824
## Model description
More information needed
## Intended uses & limitations
More information needed
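For reference, a minimal semantic-segmentation sketch is shown below; the image path is a placeholder, and the label set corresponds to the default/pipe/floor/background classes reported in the results above.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo = "selvaa/segformer-b1-finetuned-cityscapes-1024-1024-full-ds"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

# "scene.jpg" is a placeholder path.
image = Image.open("scene.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, height/4, width/4)

# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]
```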
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Default | Accuracy Pipe | Accuracy Floor | Accuracy Background | Iou Default | Iou Pipe | Iou Floor | Iou Background |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:-------------:|:--------------:|:-------------------:|:-----------:|:--------:|:---------:|:--------------:|
| 0.2488 | 1.0 | 39 | 0.1108 | 0.8539 | 0.9260 | 0.9669 | 1e-06 | 0.8345 | 0.9681 | 0.9754 | 1e-06 | 0.6794 | 0.9185 | 0.9639 |
| 0.0768 | 2.0 | 78 | 0.0659 | 0.8845 | 0.9254 | 0.9772 | 1e-06 | 0.8239 | 0.9573 | 0.9951 | 1e-06 | 0.7287 | 0.9506 | 0.9741 |
| 0.0663 | 3.0 | 117 | 0.0588 | 0.8918 | 0.9320 | 0.9793 | 1e-06 | 0.8343 | 0.9687 | 0.9931 | 1e-06 | 0.7439 | 0.9540 | 0.9776 |
| 0.0562 | 4.0 | 156 | 0.0534 | 0.9000 | 0.9592 | 0.9806 | 1e-06 | 0.9237 | 0.9627 | 0.9912 | 1e-06 | 0.7654 | 0.9539 | 0.9808 |
| 0.0509 | 5.0 | 195 | 0.0512 | 0.9063 | 0.9492 | 0.9817 | 1e-06 | 0.8876 | 0.9660 | 0.9940 | 1e-06 | 0.7813 | 0.9569 | 0.9806 |
| 0.0456 | 6.0 | 234 | 0.0498 | 0.9058 | 0.9550 | 0.9819 | 1e-06 | 0.9037 | 0.9692 | 0.9920 | 1e-06 | 0.7783 | 0.9574 | 0.9817 |
| 0.0425 | 7.0 | 273 | 0.0493 | 0.9045 | 0.9515 | 0.9817 | 1e-06 | 0.8918 | 0.9709 | 0.9918 | 1e-06 | 0.7748 | 0.9576 | 0.9810 |
| 0.0402 | 8.0 | 312 | 0.0503 | 0.9074 | 0.9456 | 0.9821 | 1e-06 | 0.8722 | 0.9706 | 0.9939 | 1e-06 | 0.7833 | 0.9581 | 0.9810 |
| 0.0382 | 9.0 | 351 | 0.0501 | 0.9108 | 0.9471 | 0.9825 | 1e-06 | 0.8766 | 0.9702 | 0.9943 | 1e-06 | 0.7930 | 0.9581 | 0.9812 |
| 0.0402 | 10.0 | 390 | 0.0474 | 0.9122 | 0.9520 | 0.9830 | 1e-06 | 0.8907 | 0.9720 | 0.9933 | 1e-06 | 0.7959 | 0.9583 | 0.9824 |
| 0.0367 | 11.0 | 429 | 0.0497 | 0.9089 | 0.9571 | 0.9824 | 1e-06 | 0.9088 | 0.9705 | 0.9919 | 1e-06 | 0.7863 | 0.9585 | 0.9820 |
| 0.0355 | 12.0 | 468 | 0.0445 | 0.9191 | 0.9618 | 0.9843 | 1e-06 | 0.9202 | 0.9719 | 0.9933 | 1e-06 | 0.8132 | 0.9597 | 0.9844 |
| 0.033 | 13.0 | 507 | 0.0494 | 0.9114 | 0.9543 | 0.9828 | 1e-06 | 0.8965 | 0.9746 | 0.9918 | 1e-06 | 0.7943 | 0.9571 | 0.9827 |
| 0.0319 | 14.0 | 546 | 0.0471 | 0.9163 | 0.9542 | 0.9837 | 1e-06 | 0.8953 | 0.9740 | 0.9934 | 1e-06 | 0.8068 | 0.9585 | 0.9835 |
| 0.0304 | 15.0 | 585 | 0.0476 | 0.9167 | 0.9527 | 0.9839 | 1e-06 | 0.8911 | 0.9726 | 0.9944 | 1e-06 | 0.8070 | 0.9598 | 0.9834 |
| 0.0304 | 16.0 | 624 | 0.0492 | 0.9151 | 0.9498 | 0.9835 | 1e-06 | 0.8812 | 0.9744 | 0.9939 | 1e-06 | 0.8036 | 0.9585 | 0.9832 |
| 0.0297 | 17.0 | 663 | 0.0504 | 0.9147 | 0.9549 | 0.9834 | 1e-06 | 0.9003 | 0.9705 | 0.9939 | 1e-06 | 0.8023 | 0.9587 | 0.9830 |
| 0.03 | 18.0 | 702 | 0.0504 | 0.9123 | 0.9584 | 0.9830 | 1e-06 | 0.9103 | 0.9732 | 0.9917 | 1e-06 | 0.7953 | 0.9588 | 0.9828 |
| 0.0294 | 19.0 | 741 | 0.0483 | 0.9162 | 0.9553 | 0.9839 | 1e-06 | 0.8980 | 0.9749 | 0.9931 | 1e-06 | 0.8054 | 0.9596 | 0.9838 |
| 0.0295 | 20.0 | 780 | 0.0506 | 0.9137 | 0.9561 | 0.9831 | 1e-06 | 0.9020 | 0.9742 | 0.9920 | 1e-06 | 0.7996 | 0.9590 | 0.9824 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1
- Datasets 2.15.0
- Tokenizers 0.15.0
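For reference, a minimal semantic-segmentation inference sketch with 🤗 Transformers; the checkpoint name below is a placeholder (substitute this fine-tuned model's repo id), and the processor is assumed to match the base `nvidia/segformer-b1-finetuned-cityscapes-1024-1024` checkpoint:
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Placeholder: substitute the repo id of this fine-tuned checkpoint
checkpoint = "nvidia/segformer-b1-finetuned-cityscapes-1024-1024"
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]
```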
|
lekhapinninti/llama-2-7b-enhanced-5epoch
|
lekhapinninti
| 2024-04-25T12:42:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-04-25T12:42:23Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
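A minimal loading sketch, assuming the adapter targets a Llama-2-7B base (inferred from the repository name, not stated in the card) and reusing the 4-bit quantization config listed above:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model_id = "meta-llama/Llama-2-7b-hf"  # assumption: the base model is not stated in this card
adapter_id = "lekhapinninti/llama-2-7b-enhanced-5epoch"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```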
|
mergekit-community/mergekit-slerp-dclolyo
|
mergekit-community
| 2024-04-25T12:40:42Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"base_model:beomi/gemma-ko-7b",
"base_model:merge:beomi/gemma-ko-7b",
"base_model:unsloth/gemma-7b",
"base_model:merge:unsloth/gemma-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T12:32:25Z |
---
base_model:
- beomi/gemma-ko-7b
- unsloth/gemma-7b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [beomi/gemma-ko-7b](https://huggingface.co/beomi/gemma-ko-7b)
* [unsloth/gemma-7b](https://huggingface.co/unsloth/gemma-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: unsloth/gemma-7b
layer_range:
- 0
- 28
- model: beomi/gemma-ko-7b
layer_range:
- 0
- 28
merge_method: slerp
base_model: unsloth/gemma-7b
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
|
stvhuang/rcr-run-5pqr6lwp-90396-master-0_20240402T105012-ep33
|
stvhuang
| 2024-04-25T12:39:41Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-04-25T12:38:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
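In the absence of author-provided instructions, a minimal feature-extraction sketch (assuming the checkpoint loads as a standard XLM-RoBERTa encoder, as the repository tags suggest):
```python
import torch
from transformers import AutoTokenizer, AutoModel

repo_id = "stvhuang/rcr-run-5pqr6lwp-90396-master-0_20240402T105012-ep33"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

inputs = tokenizer("An example sentence to embed.", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
embedding = hidden_states.mean(dim=1)  # simple mean pooling over tokens
```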
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DileepPatruni/CarsImageTraining
|
DileepPatruni
| 2024-04-25T12:39:25Z | 6 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-04-25T06:42:31Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: image of a car travelling on a bridge
parameters:
negative_prompt: NA
output:
url: images/eamBooth_output_image.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: cars, sports car, supra, toyota
---
# Cars Image Training
<Gallery />
## Trigger words
You should use `cars` to trigger the image generation.
You should use `sports car` to trigger the image generation.
You should use `supra` to trigger the image generation.
You should use `toyota` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/DileepPatruni/CarsImageTraining/tree/main) them in the Files & versions tab.
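A minimal text-to-image sketch with 🧨 diffusers, assuming the LoRA weights in this repository load directly via `load_lora_weights` (the exact weight filename is not stated in the card):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("DileepPatruni/CarsImageTraining")

prompt = "image of a toyota supra sports car travelling on a bridge"
image = pipe(prompt).images[0]
image.save("supra_on_bridge.png")
```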
|
vectorventures/Llama-3-8b-64k-PoSE-Q6_K-GGUF
|
vectorventures
| 2024-04-25T12:36:03Z | 0 | 0 | null |
[
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T12:35:45Z |
---
language:
- en
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
---
# vectorventures/Llama-3-8b-64k-PoSE-Q6_K-GGUF
This model was converted to GGUF format from [`winglian/Llama-3-8b-64k-PoSE`](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo vectorventures/Llama-3-8b-64k-PoSE-Q6_K-GGUF --model llama-3-8b-64k-pose.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo vectorventures/Llama-3-8b-64k-PoSE-Q6_K-GGUF --model llama-3-8b-64k-pose.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-64k-pose.Q6_K.gguf -n 128
```
|
rdharmal1/detr-finetuned-sku100k-v2
|
rdharmal1
| 2024-04-25T12:35:46Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T12:35:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
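In the absence of author-provided instructions, a minimal object-detection sketch (assuming the checkpoint is a standard DETR model, as the repository name suggests):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

repo_id = "rdharmal1/detr-finetuned-sku100k-v2"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForObjectDetection.from_pretrained(repo_id)

image = Image.open("shelf.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to scores, labels, and (x_min, y_min, x_max, y_max) boxes
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
print(results["scores"], results["labels"], results["boxes"])
```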
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nizarh1999/final_classification
|
nizarh1999
| 2024-04-25T12:32:58Z | 48 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:yhavinga/t5-small-24L-ccmatrix-multi",
"base_model:finetune:yhavinga/t5-small-24L-ccmatrix-multi",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-25T09:35:38Z |
---
license: apache-2.0
base_model: yhavinga/t5-small-24L-ccmatrix-multi
tags:
- generated_from_trainer
metrics:
- f1
- precision
- recall
model-index:
- name: final_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_classification
This model is a fine-tuned version of [yhavinga/t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0945
- F1: {'f1': 0.9405940594059407}
- Precision: {'precision': 0.9134615384615384}
- Recall: {'recall': 0.9693877551020408}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------:|:---------------------------------:|:------------------------------:|
| No log | 1.0 | 110 | 0.2338 | {'f1': 0.6845637583892618} | {'precision': 1.0} | {'recall': 0.5204081632653061} |
| No log | 2.0 | 220 | 0.0828 | {'f1': 0.9387755102040817} | {'precision': 0.9387755102040817} | {'recall': 0.9387755102040817} |
| No log | 3.0 | 330 | 0.0891 | {'f1': 0.9359605911330049} | {'precision': 0.9047619047619048} | {'recall': 0.9693877551020408} |
| No log | 4.0 | 440 | 0.0744 | {'f1': 0.95} | {'precision': 0.9313725490196079} | {'recall': 0.9693877551020408} |
| 0.1529 | 5.0 | 550 | 0.1012 | {'f1': 0.9405940594059407} | {'precision': 0.9134615384615384} | {'recall': 0.9693877551020408} |
| 0.1529 | 6.0 | 660 | 0.0945 | {'f1': 0.9405940594059407} | {'precision': 0.9134615384615384} | {'recall': 0.9693877551020408} |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
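A minimal inference sketch, assuming the checkpoint loads through the standard text-classification pipeline (the card does not document the label set):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="nizarh1999/final_classification")
print(classifier("Example sentence to classify."))
# -> [{'label': ..., 'score': ...}]  (labels depend on the undocumented training data)
```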
|
myrulezzzz/llama38b_alpaca
|
myrulezzzz
| 2024-04-25T12:25:56Z | 4 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T12:23:26Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** myrulezzzz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nvasko/a2c-PandaReachDense-v3
|
nvasko
| 2024-04-25T12:21:19Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-25T12:17:41Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.16 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the standard `<algo>-<env>.zip` naming used by the SB3 Hub integration):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed from the usual <algo>-<env>.zip convention
checkpoint = load_from_hub(repo_id="nvasko/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
hus960/DownSide-2x7B-Toxic-TOM-RP-TruthyDPO-Q4_K_M-GGUF
|
hus960
| 2024-04-25T12:20:59Z | 1 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T12:20:27Z |
---
tags:
- llama-cpp
- gguf-my-repo
---
# hus960/DownSide-2x7B-Toxic-TOM-RP-TruthyDPO-Q4_K_M-GGUF
This model was converted to GGUF format from [`Undi95/DownSide-2x7B-Toxic-TOM-RP-TruthyDPO`](https://huggingface.co/Undi95/DownSide-2x7B-Toxic-TOM-RP-TruthyDPO) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Undi95/DownSide-2x7B-Toxic-TOM-RP-TruthyDPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo hus960/DownSide-2x7B-Toxic-TOM-RP-TruthyDPO-Q4_K_M-GGUF --model downside-2x7b-toxic-tom-rp-truthydpo.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo hus960/DownSide-2x7B-Toxic-TOM-RP-TruthyDPO-Q4_K_M-GGUF --model downside-2x7b-toxic-tom-rp-truthydpo.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m downside-2x7b-toxic-tom-rp-truthydpo.Q4_K_M.gguf -n 128
```
|
ansumanpandey/sql_generation_using_llama3
|
ansumanpandey
| 2024-04-25T12:18:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T12:18:00Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** ansumanpandey
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Holarissun/dpo_harmlessharmless_human_gamma30.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
|
Holarissun
| 2024-04-25T12:15:31Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T12:15:29Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: dpo_harmlessharmless_human_gamma30.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_harmlessharmless_human_gamma30.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ThuyNT/CS505_COQE_viT5_train_Instruction0_SAPOL_v2_h1
|
ThuyNT
| 2024-04-25T12:14:30Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-25T11:23:34Z |
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_train_Instruction0_SAPOL_v2_h1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_SAPOL_v2_h1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
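A minimal inference sketch, assuming the checkpoint is used as a standard text-to-text model (the expected prompt format for the COQE task is not documented in the card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "ThuyNT/CS505_COQE_viT5_train_Instruction0_SAPOL_v2_h1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

inputs = tokenizer("Example input sentence in Vietnamese.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```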
|
lemon-mint/gemma-2b-translation-v0.131
|
lemon-mint
| 2024-04-25T12:14:05Z | 134 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"pytorch",
"instruct",
"finetune",
"translation",
"conversational",
"ko",
"dataset:traintogpb/aihub-flores-koen-integrated-sparta-30k",
"base_model:google/gemma-1.1-2b-it",
"base_model:finetune:google/gemma-1.1-2b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T11:49:37Z |
---
library_name: transformers
language:
- ko
license: gemma
tags:
- gemma
- pytorch
- instruct
- finetune
- translation
widget:
- messages:
- role: user
content: "Translate into Korean:Hamsters don't eat cats."
base_model: google/gemma-1.1-2b-it
datasets:
- traintogpb/aihub-flores-koen-integrated-sparta-30k
pipeline_tag: text-generation
---
# Gemma 2B Translation v0.131
- Eval Loss: `0.99568`
- Train Loss: `0.88993`
- lr: `6e-05`
- optimizer: adamw
- lr_scheduler_type: cosine
## Prompt Template
```
<bos><start_of_turn>user
Translate into Korean:Hamsters don't eat cats.<end_of_turn>
<start_of_turn>model
햄스터는 고양이를 먹지 않습니다.<eos>
```
```
<bos><start_of_turn>user
Translate into English:햄스터는 고양이를 먹지 않습니다.<end_of_turn>
<start_of_turn>model
Hamsters do not eat cats.<eos>
```
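A minimal generation sketch that follows the prompt template above via the tokenizer's chat template (device placement and generation settings are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "lemon-mint/gemma-2b-translation-v0.131"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Translate into Korean:Hamsters don't eat cats."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```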
## Model Description
- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it)
|
Holarissun/dpo_harmlessharmless_human_gamma1.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
|
Holarissun
| 2024-04-25T12:14:04Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T12:14:00Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: dpo_harmlessharmless_human_gamma1.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_harmlessharmless_human_gamma1.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
purpleor/autotrain-fczdv-zo09d
|
purpleor
| 2024-04-25T12:11:53Z | 103 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"autotrain",
"dataset:autotrain-fczdv-zo09d/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-25T09:10:06Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- autotrain-fczdv-zo09d/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.3367588222026825
f1: 0.9257166388323306
precision: 0.886979395002192
recall: 0.9679919621070762
auc: 0.9579120153789685
accuracy: 0.92216602344368
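A minimal inference sketch, assuming the checkpoint loads as a standard sequence-classification model (label names depend on the AutoTrain data, which is not documented here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "purpleor/autotrain-fczdv-zo09d"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print({model.config.id2label[i]: round(p, 4) for i, p in enumerate(probs[0].tolist())})
```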
|