| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) |
|---|---|---|---|---|---|---|---|---|

Each record's `card` column (string) is reproduced in full below its metadata row.
| adamquintero/bert-finetuned-squad | adamquintero | 2025-02-04T05:09:35Z | 6 | 0 | transformers | transformers, tensorboard, safetensors, bert, question-answering, generated_from_trainer, base_model:google-bert/bert-base-cased, base_model:finetune:google-bert/bert-base-cased, license:apache-2.0, endpoints_compatible, region:us | question-answering | 2025-02-04T03:10:19Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
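In the absence of documented usage, here is a minimal, hedged sketch that assumes the checkpoint exposes a standard extractive question-answering head:
```python
# Minimal usage sketch (not part of the original card); assumes a standard
# extractive question-answering head.
from transformers import pipeline

qa = pipeline("question-answering", model="adamquintero/bert-finetuned-squad")
result = qa(
    question="Which base model was fine-tuned?",
    context="bert-finetuned-squad is a fine-tuned version of bert-base-cased.",
)
print(result["answer"], result["score"])
```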
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
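For reference, a hedged sketch of how these hyperparameters map onto Hugging Face `TrainingArguments` (the author's actual training script is not included in the card):
```python
# Illustrative reconstruction of the reported hyperparameters; not the author's script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                    # "Native AMP" mixed-precision training
)
```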
### Training results
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
| mlfoundations-dev/llama3-1_8b_r1_annotated_math | mlfoundations-dev | 2025-02-04T05:09:02Z | 3,538 | 0 | transformers | transformers, safetensors, qwen2, text-generation, llama-factory, full, generated_from_trainer, conversational, base_model:Qwen/Qwen2.5-7B-Instruct, base_model:finetune:Qwen/Qwen2.5-7B-Instruct, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 2025-02-01T21:10:41Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3-1_8b_r1_annotated_math
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-1_8b_r1_annotated_math
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/r1_annotated_math dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
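For reference, the reported effective batch sizes are consistent with the per-device settings: total_train_batch_size = train_batch_size × num_devices × gradient_accumulation_steps = 1 × 32 × 3 = 96, and total_eval_batch_size = eval_batch_size × num_devices = 8 × 32 = 256.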
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
| Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q3_K_L-GGUF | Triangle104 | 2025-02-04T05:08:51Z | 20 | 0 | transformers | transformers, gguf, mergekit, merge, llama-cpp, gguf-my-repo, base_model:nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B, base_model:quantized:nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B, license:apache-2.0, endpoints_compatible, region:us, conversational | null | 2025-02-04T05:07:29Z |
---
base_model: nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: apache-2.0
---
# Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q3_K_L-GGUF
This model was converted to GGUF format from [`nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B`](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q3_K_L-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q3_K_L-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q3_K_L-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q3_K_L-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q3_k_l.gguf -c 2048
```
| LHRuig/adamdrivr | LHRuig | 2025-02-04T05:07:28Z | 7 | 0 | diffusers | diffusers, text-to-image, lora, template:diffusion-lora, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, region:us | text-to-image | 2025-02-04T05:07:25Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
  output:
    url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: adamdrivr
---
# adamdrivr
<Gallery />
## Model description
adamdrivr lora
## Trigger words
You should use `adamdrivr` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/adamdrivr/tree/main) them in the Files & versions tab.
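A hedged loading sketch (not part of the original card): the LoRA can be applied to the FLUX.1-dev base model with `diffusers`; the prompt and settings below are illustrative only.
```python
# Hedged sketch: assumes access to black-forest-labs/FLUX.1-dev and a CUDA GPU.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("LHRuig/adamdrivr")  # LoRA weights from this repository

# Include the trigger word `adamdrivr` so the LoRA takes effect.
image = pipe("adamdrivr wearing a suit, studio portrait", num_inference_steps=28).images[0]
image.save("adamdrivr_suit.png")
```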
| LHRuig/adamsandrs | LHRuig | 2025-02-04T05:06:50Z | 5 | 0 | diffusers | diffusers, text-to-image, lora, template:diffusion-lora, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, region:us | text-to-image | 2025-02-04T05:06:46Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
  output:
    url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: adamsandrs
---
# adamsandrs
<Gallery />
## Model description
adamsandrs lora
## Trigger words
You should use `adamsandrs` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/adamsandrs/tree/main) them in the Files & versions tab.
| Dang-gu/pokemon | Dang-gu | 2025-02-04T05:04:53Z | 24 | 0 | transformers | transformers, gguf, llama, text-generation-inference, unsloth, en, license:apache-2.0, endpoints_compatible, region:us | null | 2025-02-04T05:02:08Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Dang-gu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
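A hedged loading sketch (not part of the original card): the repository is tagged `gguf`, so the checkpoint can presumably be loaded with `llama-cpp-python`; the filename pattern below is an assumption and should be checked against the repository's files.
```python
# Hedged sketch: assumes a single GGUF file in the repo; adjust the filename pattern as needed.
from llama_cpp import Llama

llm = Llama.from_pretrained(repo_id="Dang-gu/pokemon", filename="*.gguf")
print(llm("Name three starter Pokemon:", max_tokens=64)["choices"][0]["text"])
```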
| Best000/6989ad2c-826d-4fce-80af-22562878673b | Best000 | 2025-02-04T05:04:48Z | 9 | 0 | peft | peft, safetensors, llama, axolotl, generated_from_trainer, base_model:upstage/SOLAR-10.7B-Instruct-v1.0, base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0, license:cc-by-nc-4.0, region:us | null | 2025-02-04T04:52:17Z |
---
library_name: peft
license: cc-by-nc-4.0
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6989ad2c-826d-4fce-80af-22562878673b
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 6989ad2c-826d-4fce-80af-22562878673b
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5015
## Model description
More information needed
## Intended uses & limitations
More information needed
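In the absence of documented usage, a hedged loading sketch: the repository contains a PEFT (LoRA) adapter, so it can be attached to the base model roughly as follows.
```python
# Hedged sketch: loads the LoRA adapter on top of the SOLAR-10.7B-Instruct base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "upstage/SOLAR-10.7B-Instruct-v1.0", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Best000/6989ad2c-826d-4fce-80af-22562878673b")
tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0")
```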
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| LHRuig/aaronelordsx | LHRuig | 2025-02-04T05:04:17Z | 7 | 0 | diffusers | diffusers, text-to-image, lora, template:diffusion-lora, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, region:us | text-to-image | 2025-02-04T05:03:48Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
  output:
    url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: aaronelordsx
---
# aaronelordsx
<Gallery />
## Model description
aaronelordsx lora
## Trigger words
You should use `aaronelordsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/aaronelordsx/tree/main) them in the Files & versions tab.
| LHRuig/aaronsynth | LHRuig | 2025-02-04T05:02:35Z | 6 | 0 | diffusers | diffusers, text-to-image, lora, template:diffusion-lora, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, region:us | text-to-image | 2025-02-04T05:02:21Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
  output:
    url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: aaronsynth
---
# aaronsynth
<Gallery />
## Model description
aaronsynth lora
## Trigger words
You should use `aaronsynth` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/aaronsynth/tree/main) them in the Files & versions tab.
| Tanvi12sharma/distilbert-finetuned-imdb | Tanvi12sharma | 2025-02-04T05:01:36Z | 17 | 0 | transformers | transformers, safetensors, distilbert, text-classification, arxiv:1910.09700, autotrain_compatible, endpoints_compatible, region:us | text-classification | 2025-02-04T05:01:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
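Until the authors fill this in, a minimal sketch (assuming the checkpoint exposes a standard text-classification head, which the card does not confirm):
```python
# Minimal sketch; the card does not document the label set or intended use.
from transformers import pipeline

classifier = pipeline("text-classification", model="Tanvi12sharma/distilbert-finetuned-imdb")
print(classifier("This movie was surprisingly good."))
```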
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| LHRuig/aaronelord | LHRuig | 2025-02-04T05:01:29Z | 7 | 0 | diffusers | diffusers, safetensors, text-to-image, lora, template:diffusion-lora, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, region:us | text-to-image | 2025-02-04T05:01:25Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
  output:
    url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: aaronelord
---
# aaronelord
<Gallery />
## Model description
aaronelord lora
## Trigger words
You should use `aaronelord` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/aaronelord/tree/main) them in the Files & versions tab.
| llm-jp/llm-jp-3-13b-instruct3 | llm-jp | 2025-02-04T04:59:09Z | 697 | 3 | transformers | transformers, safetensors, llama, text-generation, conversational, en, ja, license:apache-2.0, autotrain_compatible, text-generation-inference, region:us | text-generation | 2025-01-27T07:45:15Z |
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3-13b-instruct3
LLM-jp-3 is a series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
This repository provides the **llm-jp-3-13b-instruct3** model.
For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
- [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
- [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-13b-instruct3")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-13b-instruct3", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
    {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
    {"role": "user", "content": "自然言語処理とは何か"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )[0]
print(tokenizer.decode(output))
```
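In this example, the system message is Japanese for "Below is an instruction that describes a task. Write a response that appropriately fulfills the request," and the user message asks "What is natural language processing?"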
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T tokens
|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|150M|12|512|8|4096|101,874,688|50,344,448|
|440M|16|1024|8|4096|203,749,376|243,303,424|
|980M|20|1536|8|4096|305,624,064|684,258,816|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
|Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
|Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
### Post-training
We have fine-tuned the pre-trained checkpoint with supervised fine-tuning and further aligned it with Direct Preference Optimization.
#### Supervised Fine-tuning
The datasets used for supervised fine-tuning are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[AnswerCarefully (ver2.0)](https://huggingface.co/datasets/llm-jp/AnswerCarefully)| A manually constructed instruction dataset focusing on LLMs' safety. |
| |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/llm-jp/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. |
| |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
|English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/llm-jp/FLAN/blob/main/README.md) | - |
|Japanese & English|[Synthetic-JP-EN-Coding-Dataset](https://huggingface.co/datasets/llm-jp/Synthetic-JP-EN-Coding-Dataset)| A synthetic instruction dataset. |
#### Direct Preference Optimization
The datasets used for Direct Preference Optimization are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[aya-ja-evol-inst](https://huggingface.co/datasets/llm-jp/aya-ja-evol-inst) | A synthetic preference dataset focusing on LLMs' helpfulness. |
| |[ac-self-inst](https://huggingface.co/datasets/llm-jp/ac-self-inst)| A synthetic preference dataset focusing on LLMs' safety. |
## Evaluation
Detailed evaluation results are reported in this [blog](https://llm-jp.nii.ac.jp/blog/2025/02/05/instruct3.html).
## Risks and Limitations
The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru and Takashi Kodama.
| llm-jp/llm-jp-3-13b-instruct2 | llm-jp | 2025-02-04T04:58:56Z | 93 | 0 | transformers | transformers, safetensors, llama, text-generation, conversational, en, ja, license:apache-2.0, autotrain_compatible, text-generation-inference, region:us | text-generation | 2025-01-27T07:27:28Z |
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3-13b-instruct2
LLM-jp-3 is a series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
This repository provides the **llm-jp-3-13b-instruct2** model.
For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
- [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
- [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-13b-instruct2")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-13b-instruct2", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
    {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
    {"role": "user", "content": "自然言語処理とは何か"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T tokens
|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|150M|12|512|8|4096|101,874,688|50,344,448|
|440M|16|1024|8|4096|203,749,376|243,303,424|
|980M|20|1536|8|4096|305,624,064|684,258,816|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
|Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
|Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
### Post-training
We have fine-tuned the pre-trained checkpoint with supervised fine-tuning.
#### Supervised Fine-tuning
The datasets used for supervised fine-tuning are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[AnswerCarefully (ver2.0)](https://huggingface.co/datasets/llm-jp/AnswerCarefully)| A manually constructed instruction dataset focusing on LLMs' safety. |
| |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/llm-jp/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. |
| |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
|English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/llm-jp/FLAN/blob/main/README.md) | - |
|Japanese & English|[Synthetic-JP-EN-Coding-Dataset](https://huggingface.co/datasets/llm-jp/Synthetic-JP-EN-Coding-Dataset)| A synthetic instruction dataset. |
## Evaluation
Detailed evaluation results are reported in this [blog](https://llm-jp.nii.ac.jp/blog/2025/02/05/instruct3.html).
## Risks and Limitations
The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru and Takashi Kodama.
| llm-jp/llm-jp-3-7.2b-instruct3 | llm-jp | 2025-02-04T04:58:45Z | 95 | 2 | transformers | transformers, safetensors, llama, text-generation, conversational, en, ja, license:apache-2.0, autotrain_compatible, text-generation-inference, region:us | text-generation | 2025-01-31T01:29:06Z |
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3-7.2b-instruct3
LLM-jp-3 is a series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
This repository provides the **llm-jp-3-7.2b-instruct3** model.
For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
- [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
- [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-7.2b-instruct3")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-7.2b-instruct3", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
    {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
    {"role": "user", "content": "自然言語処理とは何か"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T tokens
|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|150M|12|512|8|4096|101,874,688|50,344,448|
|440M|16|1024|8|4096|203,749,376|243,303,424|
|980M|20|1536|8|4096|305,624,064|684,258,816|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
|Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
|Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
### Post-training
We have fine-tuned the pre-trained checkpoint with supervised fine-tuning and further aligned it with Direct Preference Optimization.
#### Supervised Fine-tuning
The datasets used for supervised fine-tuning are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[AnswerCarefully (ver2.0)](https://huggingface.co/datasets/llm-jp/AnswerCarefully)| A manually constructed instruction dataset focusing on LLMs' safety. |
| |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/llm-jp/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. |
| |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
|English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/llm-jp/FLAN/blob/main/README.md) | - |
|Japanese & English|[Synthetic-JP-EN-Coding-Dataset](https://huggingface.co/datasets/llm-jp/Synthetic-JP-EN-Coding-Dataset)| A synthetic instruction dataset. |
#### Direct Preference Optimization
The datasets used for Direct Preference Optimization are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[aya-ja-evol-inst](https://huggingface.co/datasets/llm-jp/aya-ja-evol-inst) | A synthetic preference dataset focusing on LLMs' helpfulness. |
| |[ac-self-inst](https://huggingface.co/datasets/llm-jp/ac-self-inst)| A synthetic preference dataset focusing on LLMs' safety. |
## Evaluation
Detailed evaluation results are reported in this [blog](https://llm-jp.nii.ac.jp/blog/2025/02/05/instruct3.html).
## Risks and Limitations
The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru and Takashi Kodama.
| llm-jp/llm-jp-3-7.2b-instruct2 | llm-jp | 2025-02-04T04:58:35Z | 16 | 0 | transformers | transformers, safetensors, llama, text-generation, conversational, en, ja, license:apache-2.0, autotrain_compatible, text-generation-inference, region:us | text-generation | 2025-01-27T06:55:41Z |
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3-7.2b-instruct2
LLM-jp-3 is a series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
This repository provides the **llm-jp-3-7.2b-instruct2** model.
For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
- [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
- [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-7.2b-instruct2")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-7.2b-instruct2", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
    {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
    {"role": "user", "content": "自然言語処理とは何か"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T tokens
|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|150M|12|512|8|4096|101,874,688|50,344,448|
|440M|16|1024|8|4096|203,749,376|243,303,424|
|980M|20|1536|8|4096|305,624,064|684,258,816|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
|Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
|Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
### Post-training
We have fine-tuned the pre-trained checkpoint with supervised fine-tuning.
#### Supervised Fine-tuning
The datasets used for supervised fine-tuning are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[AnswerCarefully (ver2.0)](https://huggingface.co/datasets/llm-jp/AnswerCarefully)| A manually constructed instruction dataset focusing on LLMs' safety. |
| |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/llm-jp/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. |
| |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
|English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/llm-jp/FLAN/blob/main/README.md) | - |
|Japanese & English|[Synthetic-JP-EN-Coding-Dataset](https://huggingface.co/datasets/llm-jp/Synthetic-JP-EN-Coding-Dataset)| A synthetic instruction dataset. |
## Evaluation
Detailed evaluation results are reported in this [blog](https://llm-jp.nii.ac.jp/blog/2025/02/05/instruct3.html).
## Risks and Limitations
The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru and Takashi Kodama.
| llm-jp/llm-jp-3-3.7b-instruct2 | llm-jp | 2025-02-04T04:58:12Z | 14 | 0 | transformers | transformers, safetensors, llama, text-generation, conversational, en, ja, license:apache-2.0, autotrain_compatible, text-generation-inference, region:us | text-generation | 2025-01-27T06:36:50Z |
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3-3.7b-instruct2
LLM-jp-3 is a series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
This repository provides the **llm-jp-3-3.7b-instruct2** model.
For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
- [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
- [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-3.7b-instruct2")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-3.7b-instruct2", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
    {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
    {"role": "user", "content": "自然言語処理とは何か"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T tokens
|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|150M|12|512|8|4096|101,874,688|50,344,448|
|440M|16|1024|8|4096|203,749,376|243,303,424|
|980M|20|1536|8|4096|305,624,064|684,258,816|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
|Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
|Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
### Post-training
We have fine-tuned the pre-trained checkpoint with supervised fine-tuning.
#### Supervised Fine-tuning
The datasets used for supervised fine-tuning are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[AnswerCarefully (ver2.0)](https://huggingface.co/datasets/llm-jp/AnswerCarefully)| A manually constructed instruction dataset focusing on LLMs' safety. |
| |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/llm-jp/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. |
| |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
|English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/llm-jp/FLAN/blob/main/README.md) | - |
|Japanese & English|[Synthetic-JP-EN-Coding-Dataset](https://huggingface.co/datasets/llm-jp/Synthetic-JP-EN-Coding-Dataset)| A synthetic instruction dataset. |
## Evaluation
Detailed evaluation results are reported in this [blog](https://llm-jp.nii.ac.jp/blog/2025/02/05/instruct3.html).
## Risks and Limitations
The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru and Takashi Kodama.
|
lesso/83884c9b-4844-4eb1-bd9b-00fe928a4eb0
|
lesso
| 2025-02-04T04:58:07Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T03:16:53Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 83884c9b-4844-4eb1-bd9b-00fe928a4eb0
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - 5210a65ef5106af6_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/5210a65ef5106af6_train_data.json
  type:
    field_instruction: caption
    field_output: matching_score
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/83884c9b-4844-4eb1-bd9b-00fe928a4eb0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god01/5210a65ef5106af6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0537fe74-0f1b-40d6-98fb-ec6c0598be9f
wandb_project: ab-god01
wandb_run: your_name
wandb_runid: 0537fe74-0f1b-40d6-98fb-ec6c0598be9f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 83884c9b-4844-4eb1-bd9b-00fe928a4eb0
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0539 | 0.0000 | 1 | 2.1099 |
| 0.5417 | 0.0005 | 50 | 0.5155 |
| 0.4613 | 0.0010 | 100 | 0.4088 |
| 0.3902 | 0.0016 | 150 | 0.3863 |
| 0.4147 | 0.0021 | 200 | 0.3761 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
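For a quick smoke test, the LoRA adapter can be applied on top of the base model with PEFT. The sketch below is illustrative only: the repo IDs come from this card, while the prompt and generation settings are placeholder assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2-0.5B-Instruct"
adapter_id = "lesso/83884c9b-4844-4eb1-bd9b-00fe928a4eb0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter weights

# Placeholder prompt mirroring the caption -> matching_score task from the config above
prompt = "A dog riding a surfboard at sunset."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```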
|
llm-jp/llm-jp-3-1.8b-instruct3
|
llm-jp
| 2025-02-04T04:57:48Z | 801 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2025-01-31T01:21:43Z |
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3-1.8b-instruct3
LLM-jp-3 is the series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
This repository provides the **llm-jp-3-1.8b-instruct3** model.
For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
- [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
- [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-1.8b-instruct3")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-1.8b-instruct3", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
{"role": "system", "content": "δ»₯δΈγ―γγΏγΉγ―γθͺ¬ζγγζη€Ίγ§γγθ¦ζ±γι©εγ«ζΊγγεΏηγζΈγγͺγγγ"},
{"role": "user", "content": "θͺηΆθ¨θͺε¦ηγ¨γ―δ½γ"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
tokenized_input,
max_new_tokens=100,
do_sample=True,
top_p=0.95,
temperature=0.7,
repetition_penalty=1.05,
)[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T tokens
|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|150M|12|512|8|4096|101,874,688|50,344,448|
|440M|16|1024|8|4096|203,749,376|243,303,424|
|980M|20|1536|8|4096|305,624,064|684,258,816|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
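A quick way to inspect the resulting vocabulary is to tokenize a sample string; the snippet below is a minimal illustration (the exact token split depends on the released vocabulary, and characters outside it are expected to fall back to byte tokens).
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-1.8b-instruct3")

# Segmentation produced by the Unigram model
print(tokenizer.tokenize("自然言語処理とは何か"))

# Characters absent from the vocabulary should be covered via byte fallback
print(tokenizer.tokenize("🦙"))
```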
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
|Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
|Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
### Post-training
We have fine-tuned the pre-trained checkpoint with supervised fine-tuning and further aligned it with Direct Preference Optimization.
#### Supervised Fine-tuning
The datasets used for supervised fine-tuning are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[AnswerCarefully (ver2.0)](https://huggingface.co/datasets/llm-jp/AnswerCarefully)| A manually constructed instruction dataset focusing on LLMs' safety. |
| |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/llm-jp/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. |
| |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
|English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/llm-jp/FLAN/blob/main/README.md) | - |
|Japanese & English|[Synthetic-JP-EN-Coding-Dataset](https://huggingface.co/datasets/llm-jp/Synthetic-JP-EN-Coding-Dataset)| A synthetic instruction dataset. |
#### Direct Preference Optimization
The datasets used for Direct Preference Optimization are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[aya-ja-evol-inst](https://huggingface.co/datasets/llm-jp/aya-ja-evol-inst) | A synthetic preference dataset focusing on LLMs' helpfulness. |
| |[ac-self-inst](https://huggingface.co/datasets/llm-jp/ac-self-inst)| A synthetic preference dataset focusing on LLMs' safety. |
## Evaluation
Detailed evaluation results are reported in this [blog](https://llm-jp.nii.ac.jp/blog/2025/02/05/instruct3.html).
## Risks and Limitations
The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru and Takashi Kodama.
|
llm-jp/llm-jp-3-980m-instruct3
|
llm-jp
| 2025-02-04T04:57:22Z | 977 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2025-01-31T01:19:58Z |
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3-980m-instruct3
LLM-jp-3 is the series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
This repository provides the **llm-jp-3-980m-instruct3** model.
For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
- [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
- [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-980m-instruct3")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-980m-instruct3", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
{"role": "system", "content": "δ»₯δΈγ―γγΏγΉγ―γθͺ¬ζγγζη€Ίγ§γγθ¦ζ±γι©εγ«ζΊγγεΏηγζΈγγͺγγγ"},
{"role": "user", "content": "θͺηΆθ¨θͺε¦ηγ¨γ―δ½γ"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
tokenized_input,
max_new_tokens=100,
do_sample=True,
top_p=0.95,
temperature=0.7,
repetition_penalty=1.05,
)[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T tokens
|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|150M|12|512|8|4096|101,874,688|50,344,448|
|440M|16|1024|8|4096|203,749,376|243,303,424|
|980M|20|1536|8|4096|305,624,064|684,258,816|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
|Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
|Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
### Post-training
We have fine-tuned the pre-trained checkpoint with supervised fine-tuning and further aligned it with Direct Preference Optimization.
#### Supervised Fine-tuning
The datasets used for supervised fine-tuning are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[AnswerCarefully (ver2.0)](https://huggingface.co/datasets/llm-jp/AnswerCarefully)| A manually constructed instruction dataset focusing on LLMs' safety. |
| |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/llm-jp/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. |
| |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
|English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/llm-jp/FLAN/blob/main/README.md) | - |
|Japanese & English|[Synthetic-JP-EN-Coding-Dataset](https://huggingface.co/datasets/llm-jp/Synthetic-JP-EN-Coding-Dataset)| A synthetic instruction dataset. |
#### Direct Preference Optimization
The datasets used for Direct Preference Optimization are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[aya-ja-evol-inst](https://huggingface.co/datasets/llm-jp/aya-ja-evol-inst) | A synthetic preference dataset focusing on LLMs' helpfulness. |
| |[ac-self-inst](https://huggingface.co/datasets/llm-jp/ac-self-inst)| A synthetic preference dataset focusing on LLMs' safety. |
## Evaluation
Detailed evaluation results are reported in this [blog](https://llm-jp.nii.ac.jp/blog/2025/02/05/instruct3.html).
## Risks and Limitations
The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru and Takashi Kodama.
|
LHRuig/henrycavillksx
|
LHRuig
| 2025-02-04T04:57:14Z | 8 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T04:57:07Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: henrycavillksx
---
# henrycavillksx
<Gallery />
## Model description
henrycavillksx lora
## Trigger words
You should use `henrycavillksx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/henrycavillksx/tree/main) them in the Files & versions tab.
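The LoRA can also be applied programmatically with diffusers on top of the FLUX.1-dev base model. This is a minimal, untested sketch: the weight file name, prompt, and sampling settings are assumptions that should be adjusted to the actual files in this repo, and the base model is gated on the Hub.
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
# Adjust weight_name to the actual .safetensors file in the Files & versions tab
pipe.load_lora_weights("LHRuig/henrycavillksx", weight_name="henrycavillksx.safetensors")
pipe.enable_model_cpu_offload()  # optional: lowers VRAM usage at the cost of speed

# Include the trigger word in the prompt
image = pipe(
    "henrycavillksx wearing a suit, studio portrait",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("suit.png")
```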
|
Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q3_K_M-GGUF
|
Triangle104
| 2025-02-04T04:57:10Z | 20 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B",
"base_model:quantized:nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T04:55:55Z |
---
base_model: nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: apache-2.0
---
# Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q3_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B`](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q3_K_M-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q3_K_M-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q3_K_M-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Rombos-EVAGutenberg-TIES-Qwen2.5-32B-Q3_K_M-GGUF --hf-file rombos-evagutenberg-ties-qwen2.5-32b-q3_k_m.gguf -c 2048
```
|
Kwakrhkr/KBO_chatbot_example
|
Kwakrhkr
| 2025-02-04T04:57:08Z | 23 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T04:38:11Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Kwakrhkr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
llm-jp/llm-jp-3-980m
|
llm-jp
| 2025-02-04T04:56:59Z | 143 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2025-01-27T04:37:47Z |
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3-980m
LLM-jp-3 is the series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
This repository provides the **llm-jp-3-980m** model.
For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
- [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
- [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-980m")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-980m", device_map="auto", torch_dtype=torch.bfloat16)
text = "θͺηΆθ¨θͺε¦ηγ¨γ―δ½γ"
tokenized_input = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
tokenized_input,
max_new_tokens=100,
do_sample=True,
top_p=0.95,
temperature=0.7,
repetition_penalty=1.05,
)[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T
|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|150M|12|512|8|4096|101,874,688|50,344,448|
|440M|16|1024|8|4096|203,749,376|243,303,424|
|980M|20|1536|8|4096|305,624,064|684,258,816|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
|Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
|Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
## Evaluation
Detailed evaluation results are reported in this [blog](https://llm-jp.nii.ac.jp/blog/2025/02/05/instruct3.html).
## Risks and Limitations
The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru and Takashi Kodama.
|
llm-jp/llm-jp-3-440m-instruct3
|
llm-jp
| 2025-02-04T04:56:45Z | 1,061 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2025-01-31T01:18:21Z |
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3-440m-instruct3
LLM-jp-3 is the series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
This repository provides the **llm-jp-3-440m-instruct3** model.
For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
- [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
- [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-440m-instruct3")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-440m-instruct3", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
{"role": "system", "content": "δ»₯δΈγ―γγΏγΉγ―γθͺ¬ζγγζη€Ίγ§γγθ¦ζ±γι©εγ«ζΊγγεΏηγζΈγγͺγγγ"},
{"role": "user", "content": "θͺηΆθ¨θͺε¦ηγ¨γ―δ½γ"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
tokenized_input,
max_new_tokens=100,
do_sample=True,
top_p=0.95,
temperature=0.7,
repetition_penalty=1.05,
)[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T tokens
|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|150M|12|512|8|4096|101,874,688|50,344,448|
|440M|16|1024|8|4096|203,749,376|243,303,424|
|980M|20|1536|8|4096|305,624,064|684,258,816|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
|Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
|Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
### Post-training
We have fine-tuned the pre-trained checkpoint with supervised fine-tuning and further aligned it with Direct Preference Optimization.
#### Supervised Fine-tuning
The datasets used for supervised fine-tuning are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[AnswerCarefully (ver2.0)](https://huggingface.co/datasets/llm-jp/AnswerCarefully)| A manually constructed instruction dataset focusing on LLMs' safety. |
| |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/llm-jp/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. |
| |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
|English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/llm-jp/FLAN/blob/main/README.md) | - |
|Japanese & English|[Synthetic-JP-EN-Coding-Dataset](https://huggingface.co/datasets/llm-jp/Synthetic-JP-EN-Coding-Dataset)| A synthetic instruction dataset. |
#### Direct Preference Optimization
The datasets used for Direct Preference Optimization are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[aya-ja-evol-inst](https://huggingface.co/datasets/llm-jp/aya-ja-evol-inst) | A synthetic preference dataset focusing on LLMs' helpfulness. |
| |[ac-self-inst](https://huggingface.co/datasets/llm-jp/ac-self-inst)| A synthetic preference dataset focusing on LLMs' safety. |
## Evaluation
Detailed evaluation results are reported in this [blog](https://llm-jp.nii.ac.jp/blog/2025/02/05/instruct3.html).
## Risks and Limitations
The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru and Takashi Kodama.
|
abenius/9ed5f9ec-39e9-4896-bd6b-5db337d9f573
|
abenius
| 2025-02-04T04:56:16Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T04:30:03Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9ed5f9ec-39e9-4896-bd6b-5db337d9f573
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ceac57436127cc6c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ceac57436127cc6c_train_data.json
type:
field_input: ''
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: abenius/9ed5f9ec-39e9-4896-bd6b-5db337d9f573
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ceac57436127cc6c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: cc891c4e-9b2c-4c32-93f8-b418eb54f13f
wandb_project: Gradients-On-12
wandb_run: your_name
wandb_runid: cc891c4e-9b2c-4c32-93f8-b418eb54f13f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 9ed5f9ec-39e9-4896-bd6b-5db337d9f573
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6954 | 0.6015 | 200 | 1.8017 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
cgus/HuatuoGPT-o1-7B-exl2
|
cgus
| 2025-02-04T04:55:55Z | 11 | 0 | null |
[
"qwen2",
"medical",
"text-generation",
"conversational",
"en",
"zh",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"dataset:FreedomIntelligence/medical-o1-verifiable-problem",
"arxiv:2412.18925",
"base_model:FreedomIntelligence/HuatuoGPT-o1-7B",
"base_model:quantized:FreedomIntelligence/HuatuoGPT-o1-7B",
"license:apache-2.0",
"4-bit",
"exl2",
"region:us"
] |
text-generation
| 2025-02-04T04:04:44Z |
---
license: apache-2.0
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
- FreedomIntelligence/medical-o1-verifiable-problem
language:
- en
- zh
base_model:
- FreedomIntelligence/HuatuoGPT-o1-7B
pipeline_tag: text-generation
tags:
- medical
---
# HuatuoGPT-o1-7B-exl2
Original model: [HuatuoGPT-o1-7B](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-7B) made by [FreedomIntelligence](https://huggingface.co/FreedomIntelligence)
Based on: [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) by [Qwen](https://huggingface.co/Qwen)
## Quants
[4bpw h6 (main)](https://huggingface.co/cgus/HuatuoGPT-o1-7B-exl2/tree/main)
[4.5bpw h6](https://huggingface.co/cgus/HuatuoGPT-o1-7B-exl2/tree/4.5bpw-h6)
[5bpw h6](https://huggingface.co/cgus/HuatuoGPT-o1-7B-exl2/tree/5bpw-h6)
[6bpw h6](https://huggingface.co/cgus/HuatuoGPT-o1-7B-exl2/tree/6bpw-h6)
[8bpw h8](https://huggingface.co/cgus/HuatuoGPT-o1-7B-exl2/tree/8bpw-h8)
## Quantization notes
Made with Exllamav2 0.2.7 using its default calibration dataset.
Exl2 quants require an Nvidia RTX GPU on Windows, or an Nvidia RTX or AMD ROCm GPU on Linux.
The model has to fit entirely in VRAM, as RAM offloading isn't supported natively.
It can be used with apps such as TabbyAPI, Text-Generation-WebUI, LoLLMs, and others.
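The quants above live on separate branches of this repo. Below is a minimal sketch of downloading one branch and loading it with the exllamav2 dynamic generator; the branch choice, prompt, and token budget are assumptions, and apps like TabbyAPI or Text-Generation-WebUI can point at the same downloaded folder instead.
```python
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Download one quant branch (e.g. 4.5bpw-h6) to a local folder
model_dir = snapshot_download(repo_id="cgus/HuatuoGPT-o1-7B-exl2", revision="4.5bpw-h6")

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)            # split the weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="How to stop a cough?", max_new_tokens=256))
```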
# Original model card
<div align="center">
<h1>
HuatuoGPT-o1-7B
</h1>
</div>
<div align="center">
<a href="https://github.com/FreedomIntelligence/HuatuoGPT-o1" target="_blank">GitHub</a> | <a href="https://arxiv.org/pdf/2412.18925" target="_blank">Paper</a>
</div>
# <span>Introduction</span>
**HuatuoGPT-o1** is a medical LLM designed for advanced medical reasoning. It generates a complex thought process, reflecting and refining its reasoning, before providing a final response.
For more information, visit our GitHub repository:
[https://github.com/FreedomIntelligence/HuatuoGPT-o1](https://github.com/FreedomIntelligence/HuatuoGPT-o1).
# <span>Model Info</span>
| | Backbone | Supported Languages | Link |
| -------------------- | ------------ | ----- | --------------------------------------------------------------------- |
| **HuatuoGPT-o1-8B** | LLaMA-3.1-8B | English | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-8B) |
| **HuatuoGPT-o1-70B** | LLaMA-3.1-70B | English | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-70B) |
| **HuatuoGPT-o1-7B** | Qwen2.5-7B | English & Chinese | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-7B) |
| **HuatuoGPT-o1-72B** | Qwen2.5-72B | English & Chinese | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-72B) |
# <span>Usage</span>
You can use HuatuoGPT-o1-7B in the same way as `Qwen2.5-7B-Instruct`. You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-7B",torch_dtype="auto",device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-7B")
input_text = "How to stop a cough?"
messages = [{"role": "user", "content": input_text}]
inputs = tokenizer(
    tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True),
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
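For serving, the same checkpoint can be loaded with vLLM as mentioned above; a minimal offline-inference sketch, where the sampling settings are assumptions:
```python
from vllm import LLM, SamplingParams

llm = LLM(model="FreedomIntelligence/HuatuoGPT-o1-7B")
params = SamplingParams(temperature=0.7, max_tokens=2048)

messages = [{"role": "user", "content": "How to stop a cough?"}]
outputs = llm.chat(messages, params)  # applies the model's chat template internally
print(outputs[0].outputs[0].text)
```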
HuatuoGPT-o1 adopts a *thinks-before-it-answers* approach, with outputs formatted as:
```
## Thinking
[Reasoning process]
## Final Response
[Output]
```
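Since completions follow this two-section format, downstream code can separate the reasoning from the final answer with simple string handling; a minimal sketch using the header names shown above:
```python
def split_huatuo_output(text: str) -> tuple[str, str]:
    """Split a HuatuoGPT-o1 completion into (reasoning, final_response)."""
    reasoning, _, final = text.partition("## Final Response")
    reasoning = reasoning.replace("## Thinking", "", 1).strip()
    return reasoning, final.strip()

reasoning, answer = split_huatuo_output(
    "## Thinking\nA cough is often viral...\n## Final Response\nStay hydrated and rest."
)
print(answer)  # -> "Stay hydrated and rest."
```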
# <span>Citation</span>
```
@misc{chen2024huatuogpto1medicalcomplexreasoning,
title={HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs},
author={Junying Chen and Zhenyang Cai and Ke Ji and Xidong Wang and Wanlong Liu and Rongsheng Wang and Jianye Hou and Benyou Wang},
year={2024},
eprint={2412.18925},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.18925},
}
```
|
LHRuig/tanukipalm
|
LHRuig
| 2025-02-04T04:55:50Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T04:55:46Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: tanukipalm
---
# tanukipalm
<Gallery />
## Model description
tanukipalm lora
## Trigger words
You should use `tanukipalm` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/tanukipalm/tree/main) them in the Files & versions tab.
|
llm-jp/llm-jp-3-150m-instruct3
|
llm-jp
| 2025-02-04T04:55:04Z | 1,329 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2025-01-31T01:15:34Z |
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3-150m-instruct3
LLM-jp-3 is the series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
This repository provides the **llm-jp-3-150m-instruct3** model.
For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
- [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
- [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-150m-instruct3")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-150m-instruct3", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
{"role": "system", "content": "δ»₯δΈγ―γγΏγΉγ―γθͺ¬ζγγζη€Ίγ§γγθ¦ζ±γι©εγ«ζΊγγεΏηγζΈγγͺγγγ"},
{"role": "user", "content": "θͺηΆθ¨θͺε¦ηγ¨γ―δ½γ"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
tokenized_input,
max_new_tokens=100,
do_sample=True,
top_p=0.95,
temperature=0.7,
repetition_penalty=1.05,
)[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T tokens
|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|150M|12|512|8|4096|101,874,688|50,344,448|
|440M|16|1024|8|4096|203,749,376|243,303,424|
|980M|20|1536|8|4096|305,624,064|684,258,816|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
|Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
|Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
### Post-training
We have fine-tuned the pre-trained checkpoint with supervised fine-tuning and further aligned it with Direct Preference Optimization.
#### Supervised Fine-tuning
The datasets used for supervised fine-tuning are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[AnswerCarefully (ver2.0)](https://huggingface.co/datasets/llm-jp/AnswerCarefully)| A manually constructed instruction dataset focusing on LLMs' safety. |
| |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/llm-jp/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. |
| |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
|English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/llm-jp/FLAN/blob/main/README.md) | - |
|Japanese & English|[Synthetic-JP-EN-Coding-Dataset](https://huggingface.co/datasets/llm-jp/Synthetic-JP-EN-Coding-Dataset)| A synthetic instruction dataset. |
#### Direct Preference Optimization
The datasets used for Direct Preference Optimization are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[aya-ja-evol-inst](https://huggingface.co/datasets/llm-jp/aya-ja-evol-inst) | A synthetic preference dataset focusing on LLMs' helpfulness. |
| |[ac-self-inst](https://huggingface.co/datasets/llm-jp/ac-self-inst)| A synthetic preference dataset focusing on LLMs' safety. |
## Evaluation
Detailed evaluation results are reported in this [blog](https://llm-jp.nii.ac.jp/blog/2025/02/05/instruct3.html).
## Risks and Limitations
The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru and Takashi Kodama.
|
LHRuig/itmansx
|
LHRuig
| 2025-02-04T04:55:02Z | 6 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T04:54:51Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: itmansx
---
# itmansx
<Gallery />
## Model description
itmansx lora
## Trigger words
You should use `itmansx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/itmansx/tree/main) them in the Files & versions tab.
|
llm-jp/llm-jp-3-150m-instruct2
|
llm-jp
| 2025-02-04T04:54:50Z | 189 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2025-01-27T06:05:44Z |
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# llm-jp-3-150m-instruct2
LLM-jp-3 is the series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
This repository provides the **llm-jp-3-150m-instruct2** model.
For an overview of the LLM-jp-3 models across different parameter sizes, please refer to:
- [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa)
- [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-150m-instruct2")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-150m-instruct2", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
{"role": "system", "content": "δ»₯δΈγ―γγΏγΉγ―γθͺ¬ζγγζη€Ίγ§γγθ¦ζ±γι©εγ«ζΊγγεΏηγζΈγγͺγγγ"},
{"role": "user", "content": "θͺηΆθ¨θͺε¦ηγ¨γ―δ½γ"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
tokenized_input,
max_new_tokens=100,
do_sample=True,
top_p=0.95,
temperature=0.7,
repetition_penalty=1.05,
)[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T tokens
|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|150M|12|512|8|4096|101,874,688|50,344,448|
|440M|16|1024|8|4096|203,749,376|243,303,424|
|980M|20|1536|8|4096|305,624,064|684,258,816|
|1.8b|24|2048|16|4096|407,498,752|1,459,718,144|
|3.7b|28|3072|24|4096|611,248,128|3,171,068,928|
|7.2b|32|4096|32|4096|814,997,504|6,476,271,616|
|13b|40|5120|40|4096|1,018,746,880|12,688,184,320|
|172b|96|12288|96|4096|2,444,992,512|169,947,181,056|
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B
|Chinese|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.8B
|Korean|[Wikipedia](https://huggingface.co/datasets/bigcode/the-stack)|0.3B
### Post-training
We have fine-tuned the pre-trained checkpoint with supervised fine-tuning.
#### Supervised Fine-tuning
The datasets used for supervised fine-tuning are as follows:
| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[AnswerCarefully (ver2.0)](https://huggingface.co/datasets/llm-jp/AnswerCarefully)| A manually constructed instruction dataset focusing on LLMs' safety. |
| |ichikara-instruction-format| A small subset of the ichikara-instruction dataset, edited with some constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft-ja](https://huggingface.co/datasets/llm-jp/wizardlm8x22b-logical-math-coding-sft-ja)| A synthetic instruction dataset. |
| |[magpie-sft-v1.0](https://huggingface.co/datasets/llm-jp/magpie-sft-v1.0)| A synthetic instruction dataset we created. |
|English|[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/llm-jp/FLAN/blob/main/README.md) | - |
|Japanese & English|[Synthetic-JP-EN-Coding-Dataset](https://huggingface.co/datasets/llm-jp/Synthetic-JP-EN-Coding-Dataset)| A synthetic instruction dataset. |
## Evaluation
Detailed evaluation results are reported in this [blog](https://llm-jp.nii.ac.jp/blog/2025/02/05/instruct3.html).
## Risks and Limitations
The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru and Takashi Kodama.
|
LHRuig/auronplay
|
LHRuig
| 2025-02-04T04:54:26Z | 6 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T04:54:05Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: auronplay
---
# auronplay
<Gallery />
## Model description
auronplay lora
## Trigger words
You should use `auronplay` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/auronplay/tree/main) them in the Files & versions tab.
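A minimal diffusers sketch for using this LoRA (assuming the safetensors file in this repo loads directly onto the FLUX.1-dev base; pass `weight_name=` explicitly if the loader cannot locate it):
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model and attach this LoRA adapter.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("LHRuig/auronplay")
pipe.to("cuda")

# The trigger word `auronplay` activates the learned subject.
image = pipe("auronplay wearing a suit, studio portrait", num_inference_steps=28).images[0]
image.save("auronplay_suit.png")
```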
|
LHRuig/sunkrasuang
|
LHRuig
| 2025-02-04T04:52:03Z | 6 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T04:51:58Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: sunkrasuang
---
# sunkrasuang
<Gallery />
## Model description
sunkrasuang lora
## Trigger words
You should use `sunkrasuang` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/sunkrasuang/tree/main) them in the Files & versions tab.
|
botenius/610336ee-bb46-4371-b13d-d7d6315e3454
|
botenius
| 2025-02-04T04:49:16Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T03:24:06Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 610336ee-bb46-4371-b13d-d7d6315e3454
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5210a65ef5106af6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5210a65ef5106af6_train_data.json
type:
field_instruction: caption
field_output: matching_score
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: botenius/610336ee-bb46-4371-b13d-d7d6315e3454
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/5210a65ef5106af6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 0537fe74-0f1b-40d6-98fb-ec6c0598be9f
wandb_project: Gradients-On-13
wandb_run: your_name
wandb_runid: 0537fe74-0f1b-40d6-98fb-ec6c0598be9f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 610336ee-bb46-4371-b13d-d7d6315e3454
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4471 | 0.0021 | 200 | 0.4489 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
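For a quick sanity check of the released adapter, a minimal inference sketch (assuming the LoRA weights in this repo apply cleanly to the base model; the prompt below is illustrative):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2-0.5B-Instruct"
adapter_id = "botenius/610336ee-bb46-4371-b13d-d7d6315e3454"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

prompt = "Describe the caption in one sentence: a dog catching a frisbee."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```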
|
LHRuig/roycintasx
|
LHRuig
| 2025-02-04T04:47:28Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T04:46:28Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: roycintasx
---
# roycintasx
<Gallery />
## Model description
roycintasx lora
## Trigger words
You should use `roycintasx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/roycintasx/tree/main) them in the Files & versions tab.
|
Tanvi12sharma/results
|
Tanvi12sharma
| 2025-02-04T04:47:06Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-04T04:46:46Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6079
- Accuracy: 0.8111
- F1: 0.8083
- Precision: 0.8226
- Recall: 0.8111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6858 | 1.0 | 27 | 0.6618 | 0.7667 | 0.7618 | 0.7813 | 0.7667 |
| 0.6287 | 2.0 | 54 | 0.6079 | 0.8111 | 0.8083 | 0.8226 | 0.8111 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
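A minimal usage sketch with the `text-classification` pipeline (the label names returned come from the model's config; the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Tanvi12sharma/results")
print(classifier("The onboarding process was smooth and well documented."))
```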
|
LHRuig/justintimbrlke
|
LHRuig
| 2025-02-04T04:46:01Z | 11 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-04T04:45:57Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: justintimbrlke
---
# justintimbrlke
<Gallery />
## Model description
justintimbrlke lora
## Trigger words
You should use `justintimbrlke` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/justintimbrlke/tree/main) them in the Files & versions tab.
|
Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q8_0-GGUF
|
Triangle104
| 2025-02-04T04:45:21Z | 19 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:nbeerbower/GreatFirewall-DPO",
"dataset:nbeerbower/Schule-DPO",
"dataset:nbeerbower/Purpura-DPO",
"dataset:nbeerbower/Arkhaios-DPO",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:flammenai/Date-DPO-NoAsterisks",
"dataset:flammenai/Prude-Phi3-DPO",
"dataset:Atsunori/HelpSteer2-DPO",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"base_model:nbeerbower/Dumpling-Qwen2.5-1.5B-v2",
"base_model:quantized:nbeerbower/Dumpling-Qwen2.5-1.5B-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T04:44:16Z |
---
library_name: transformers
license: apache-2.0
datasets:
- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
base_model: nbeerbower/Dumpling-Qwen2.5-1.5B-v2
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q8_0-GGUF
This model was converted to GGUF format from [`nbeerbower/Dumpling-Qwen2.5-1.5B-v2`](https://huggingface.co/nbeerbower/Dumpling-Qwen2.5-1.5B-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Dumpling-Qwen2.5-1.5B-v2) for more details on the model.
---
nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B finetuned on:
- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO (1,000 samples)
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
## Method
QLoRA ORPO tune with 2x RTX 3090 for 2 epochs.
```python
# QLoRA config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch_dtype,
    bnb_4bit_use_double_quant=True,
)

# LoRA config
peft_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['up_proj', 'down_proj', 'gate_proj', 'k_proj', 'q_proj', 'v_proj', 'o_proj']
)

# Training config
orpo_args = ORPOConfig(
    run_name=new_model,
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    max_length=2048,
    max_prompt_length=1024,
    max_completion_length=1024,
    beta=0.1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,
    optim="paged_adamw_8bit",
    num_train_epochs=2,
    evaluation_strategy="steps",
    eval_steps=0.2,
    logging_steps=1,
    warmup_steps=10,
    max_grad_norm=10,
    report_to="wandb",
    output_dir="./results/",
    bf16=True,
)
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q8_0-GGUF --hf-file dumpling-qwen2.5-1.5b-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q8_0-GGUF --hf-file dumpling-qwen2.5-1.5b-v2-q8_0.gguf -c 2048
```
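Once running, `llama-server` exposes an OpenAI-compatible HTTP API; a minimal query sketch in Python (assuming the default port 8080):
```python
import requests

# Query the running llama-server (OpenAI-compatible chat completions endpoint).
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Give me one sentence about dumplings."}],
        "max_tokens": 64,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```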
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q8_0-GGUF --hf-file dumpling-qwen2.5-1.5b-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q8_0-GGUF --hf-file dumpling-qwen2.5-1.5b-v2-q8_0.gguf -c 2048
```
|
Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q6_K-GGUF
|
Triangle104
| 2025-02-04T04:44:34Z | 18 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:nbeerbower/GreatFirewall-DPO",
"dataset:nbeerbower/Schule-DPO",
"dataset:nbeerbower/Purpura-DPO",
"dataset:nbeerbower/Arkhaios-DPO",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:flammenai/Date-DPO-NoAsterisks",
"dataset:flammenai/Prude-Phi3-DPO",
"dataset:Atsunori/HelpSteer2-DPO",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"base_model:nbeerbower/Dumpling-Qwen2.5-1.5B-v2",
"base_model:quantized:nbeerbower/Dumpling-Qwen2.5-1.5B-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T04:43:23Z |
---
library_name: transformers
license: apache-2.0
datasets:
- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
base_model: nbeerbower/Dumpling-Qwen2.5-1.5B-v2
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q6_K-GGUF
This model was converted to GGUF format from [`nbeerbower/Dumpling-Qwen2.5-1.5B-v2`](https://huggingface.co/nbeerbower/Dumpling-Qwen2.5-1.5B-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Dumpling-Qwen2.5-1.5B-v2) for more details on the model.
---
nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B finetuned on:
- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO (1,000 samples)
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
## Method
QLoRA ORPO tune with 2x RTX 3090 for 2 epochs.
```python
# QLoRA config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch_dtype,
    bnb_4bit_use_double_quant=True,
)

# LoRA config
peft_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['up_proj', 'down_proj', 'gate_proj', 'k_proj', 'q_proj', 'v_proj', 'o_proj']
)

# Training config
orpo_args = ORPOConfig(
    run_name=new_model,
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    max_length=2048,
    max_prompt_length=1024,
    max_completion_length=1024,
    beta=0.1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,
    optim="paged_adamw_8bit",
    num_train_epochs=2,
    evaluation_strategy="steps",
    eval_steps=0.2,
    logging_steps=1,
    warmup_steps=10,
    max_grad_norm=10,
    report_to="wandb",
    output_dir="./results/",
    bf16=True,
)
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q6_K-GGUF --hf-file dumpling-qwen2.5-1.5b-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q6_K-GGUF --hf-file dumpling-qwen2.5-1.5b-v2-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q6_K-GGUF --hf-file dumpling-qwen2.5-1.5b-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q6_K-GGUF --hf-file dumpling-qwen2.5-1.5b-v2-q6_k.gguf -c 2048
```
|
daniel40/f3d37493-6712-4baf-854f-7ac9e8904b26
|
daniel40
| 2025-02-04T04:42:22Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T04:41:39Z |
---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f3d37493-6712-4baf-854f-7ac9e8904b26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-PhiForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 433b1171462ef288_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/433b1171462ef288_train_data.json
type:
field_input: critic_prompt
field_instruction: init_prompt
field_output: init_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/f3d37493-6712-4baf-854f-7ac9e8904b26
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/433b1171462ef288_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8608c2ec-a087-435a-9278-1ca3f3049fce
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8608c2ec-a087-435a-9278-1ca3f3049fce
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f3d37493-6712-4baf-854f-7ac9e8904b26
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 6.9361 |
| 6.9246 | 0.0094 | 50 | 6.9217 |
| 6.8964 | 0.0189 | 100 | 6.8941 |
| 6.8915 | 0.0283 | 150 | 6.8914 |
| 6.8897 | 0.0377 | 200 | 6.8911 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kk-aivio/3bb3d5d3-94ee-4f35-931b-fb35b74b94e4
|
kk-aivio
| 2025-02-04T04:42:06Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b",
"base_model:adapter:unsloth/gemma-2-9b",
"license:gemma",
"region:us"
] | null | 2025-02-04T04:09:44Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3bb3d5d3-94ee-4f35-931b-fb35b74b94e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- aca1347c2eff58c3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aca1347c2eff58c3_train_data.json
type:
field_instruction: question_text
field_output: document_plaintext
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/3bb3d5d3-94ee-4f35-931b-fb35b74b94e4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/aca1347c2eff58c3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 36088511-e20e-40ed-8fa3-5090e5d7f560
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 36088511-e20e-40ed-8fa3-5090e5d7f560
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3bb3d5d3-94ee-4f35-931b-fb35b74b94e4
This model is a fine-tuned version of [unsloth/gemma-2-9b](https://huggingface.co/unsloth/gemma-2-9b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.9006 |
| 1.5986 | 0.0033 | 50 | 1.7371 |
| 1.7834 | 0.0065 | 100 | 1.7245 |
| 1.5522 | 0.0098 | 150 | 1.7189 |
| 1.7581 | 0.0130 | 200 | 1.7177 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q4_K_M-GGUF
|
Triangle104
| 2025-02-04T04:41:29Z | 18 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:nbeerbower/GreatFirewall-DPO",
"dataset:nbeerbower/Schule-DPO",
"dataset:nbeerbower/Purpura-DPO",
"dataset:nbeerbower/Arkhaios-DPO",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:flammenai/Date-DPO-NoAsterisks",
"dataset:flammenai/Prude-Phi3-DPO",
"dataset:Atsunori/HelpSteer2-DPO",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"base_model:nbeerbower/Dumpling-Qwen2.5-1.5B-v2",
"base_model:quantized:nbeerbower/Dumpling-Qwen2.5-1.5B-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T04:40:04Z |
---
library_name: transformers
license: apache-2.0
datasets:
- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
base_model: nbeerbower/Dumpling-Qwen2.5-1.5B-v2
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/Dumpling-Qwen2.5-1.5B-v2`](https://huggingface.co/nbeerbower/Dumpling-Qwen2.5-1.5B-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Dumpling-Qwen2.5-1.5B-v2) for more details on the model.
---
nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B finetuned on:
- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO (1,000 samples)
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
## Method
QLoRA ORPO tune with 2x RTX 3090 for 2 epochs.
```python
# QLoRA config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch_dtype,
    bnb_4bit_use_double_quant=True,
)

# LoRA config
peft_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['up_proj', 'down_proj', 'gate_proj', 'k_proj', 'q_proj', 'v_proj', 'o_proj']
)

# Training config
orpo_args = ORPOConfig(
    run_name=new_model,
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    max_length=2048,
    max_prompt_length=1024,
    max_completion_length=1024,
    beta=0.1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,
    optim="paged_adamw_8bit",
    num_train_epochs=2,
    evaluation_strategy="steps",
    eval_steps=0.2,
    logging_steps=1,
    warmup_steps=10,
    max_grad_norm=10,
    report_to="wandb",
    output_dir="./results/",
    bf16=True,
)
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q4_K_M-GGUF --hf-file dumpling-qwen2.5-1.5b-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q4_K_M-GGUF --hf-file dumpling-qwen2.5-1.5b-v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q4_K_M-GGUF --hf-file dumpling-qwen2.5-1.5b-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dumpling-Qwen2.5-1.5B-v2-Q4_K_M-GGUF --hf-file dumpling-qwen2.5-1.5b-v2-q4_k_m.gguf -c 2048
```
|
nat-hunt/ede49b2d-ad94-4a67-94d1-b18e7338e541
|
nat-hunt
| 2025-02-04T04:40:21Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b",
"base_model:adapter:unsloth/gemma-2-9b",
"license:gemma",
"region:us"
] | null | 2025-02-04T04:08:45Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ede49b2d-ad94-4a67-94d1-b18e7338e541
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- aca1347c2eff58c3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aca1347c2eff58c3_train_data.json
type:
field_instruction: question_text
field_output: document_plaintext
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/ede49b2d-ad94-4a67-94d1-b18e7338e541
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/aca1347c2eff58c3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 36088511-e20e-40ed-8fa3-5090e5d7f560
wandb_project: Birthday-SN56-25-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 36088511-e20e-40ed-8fa3-5090e5d7f560
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ede49b2d-ad94-4a67-94d1-b18e7338e541
This model is a fine-tuned version of [unsloth/gemma-2-9b](https://huggingface.co/unsloth/gemma-2-9b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.9006 |
| 1.5985 | 0.0033 | 50 | 1.7378 |
| 1.7818 | 0.0065 | 100 | 1.7249 |
| 1.5529 | 0.0098 | 150 | 1.7193 |
| 1.7582 | 0.0130 | 200 | 1.7181 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kk-aivio/02077c80-9512-4be2-b82a-78f6d7bde85b
|
kk-aivio
| 2025-02-04T04:36:36Z | 11 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T04:28:49Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 02077c80-9512-4be2-b82a-78f6d7bde85b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 02077c80-9512-4be2-b82a-78f6d7bde85b
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7547
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
fatihfauzan26/PEGASUS_super
|
fatihfauzan26
| 2025-02-04T04:35:38Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"summarization",
"id",
"dataset:fajrikoto/id_liputan6",
"base_model:google/pegasus-cnn_dailymail",
"base_model:finetune:google/pegasus-cnn_dailymail",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2025-02-04T03:41:26Z |
---
license: mit
datasets:
- fajrikoto/id_liputan6
language:
- id
metrics:
- rouge
base_model:
- google/pegasus-cnn_dailymail
pipeline_tag: summarization
library_name: transformers
---
PEGASUS Mini is a fine-tuned version of the PEGASUS model, originally pre-trained on the CNN/Daily Mail dataset. This fine-tuning is specifically tailored for abstractive text summarization of Indonesian news articles using the Liputan6 dataset.
The model was trained on a 50,000-sample subset of the Liputan6 dataset for 3 epochs, with a minimum input length and a maximum target length of 256 tokens, making it lightweight and efficient while maintaining strong summarization performance.
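A minimal usage sketch with the `summarization` pipeline (the article string is a placeholder and the generation lengths are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="fatihfauzan26/PEGASUS_super")
article = "Liputan6.com, Jakarta: ..."  # an Indonesian news article goes here
print(summarizer(article, max_length=128, min_length=32, do_sample=False)[0]["summary_text"])
```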
|
antimage88/66c083b0-dbd5-4bc2-900d-04dd2cba9af2
|
antimage88
| 2025-02-04T04:35:14Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T03:57:07Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 66c083b0-dbd5-4bc2-900d-04dd2cba9af2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ceac57436127cc6c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ceac57436127cc6c_train_data.json
type:
field_input: ''
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: antimage88/66c083b0-dbd5-4bc2-900d-04dd2cba9af2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/ceac57436127cc6c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cc891c4e-9b2c-4c32-93f8-b418eb54f13f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cc891c4e-9b2c-4c32-93f8-b418eb54f13f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 66c083b0-dbd5-4bc2-900d-04dd2cba9af2
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 167
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7386 | 0.9985 | 166 | 1.7940 |
| 3.2628 | 1.0045 | 167 | 1.8003 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
trenden/cae9c09a-b7bd-482b-980b-ec51d27d9936
|
trenden
| 2025-02-04T04:32:47Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:lcw99/zephykor-ko-7b-chang",
"base_model:adapter:lcw99/zephykor-ko-7b-chang",
"region:us"
] | null | 2025-02-04T04:27:30Z |
---
library_name: peft
base_model: lcw99/zephykor-ko-7b-chang
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cae9c09a-b7bd-482b-980b-ec51d27d9936
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# cae9c09a-b7bd-482b-980b-ec51d27d9936
This model is a fine-tuned version of [lcw99/zephykor-ko-7b-chang](https://huggingface.co/lcw99/zephykor-ko-7b-chang) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mrferr3t/aad9b8b9-407f-420e-9e0b-0a3f294955cb
|
mrferr3t
| 2025-02-04T04:32:44Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-64k",
"region:us"
] | null | 2025-02-04T03:31:07Z |
---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aad9b8b9-407f-420e-9e0b-0a3f294955cb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: NousResearch/Yarn-Llama-2-7b-64k
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 9702554f26460ac5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9702554f26460ac5_train_data.json
type:
field_input: ingredients
field_instruction: method
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 40
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/aad9b8b9-407f-420e-9e0b-0a3f294955cb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/9702554f26460ac5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 50
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 40
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 90e1caee-8148-4b00-a510-c0c50a07f653
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 90e1caee-8148-4b00-a510-c0c50a07f653
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# aad9b8b9-407f-420e-9e0b-0a3f294955cb
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 482
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0013 | 1 | 2.7332 |
| No log | 0.0517 | 40 | 1.7939 |
| No log | 0.1035 | 80 | 1.1105 |
| 3.4989 | 0.1552 | 120 | 0.9959 |
| 3.4989 | 0.2070 | 160 | 0.9604 |
| 1.9903 | 0.2587 | 200 | 0.9362 |
| 1.9903 | 0.3105 | 240 | 0.9447 |
| 1.9903 | 0.3622 | 280 | 0.9262 |
| 1.901 | 0.4140 | 320 | 0.9183 |
| 1.901 | 0.4657 | 360 | 0.9293 |
| 1.8652 | 0.5175 | 400 | 0.9051 |
| 1.8652 | 0.5692 | 440 | 0.9050 |
| 1.8652 | 0.6210 | 480 | 0.8929 |
| 1.8236 | 0.6727 | 520 | 0.9039 |
| 1.8236 | 0.7245 | 560 | 0.9017 |
| 1.8028 | 0.7762 | 600 | 0.8815 |
| 1.8028 | 0.8279 | 640 | 0.8786 |
| 1.8028 | 0.8797 | 680 | 0.8734 |
| 1.7622 | 0.9314 | 720 | 0.8626 |
| 1.7622 | 0.9832 | 760 | 0.8451 |
| 1.5765 | 1.0349 | 800 | 0.8362 |
| 1.5765 | 1.0867 | 840 | 0.8408 |
| 1.5765 | 1.1384 | 880 | 0.8425 |
| 1.3617 | 1.1902 | 920 | 0.8341 |
| 1.3617 | 1.2419 | 960 | 0.8333 |
| 1.4099 | 1.2937 | 1000 | 0.8296 |
| 1.4099 | 1.3454 | 1040 | 0.8173 |
| 1.4099 | 1.3972 | 1080 | 0.8156 |
| 1.426 | 1.4489 | 1120 | 0.8179 |
| 1.426 | 1.5006 | 1160 | 0.8093 |
| 1.4494 | 1.5524 | 1200 | 0.8030 |
| 1.4494 | 1.6041 | 1240 | 0.8015 |
| 1.4494 | 1.6559 | 1280 | 0.7961 |
| 1.3823 | 1.7076 | 1320 | 0.7912 |
| 1.3823 | 1.7594 | 1360 | 0.7750 |
| 1.3616 | 1.8111 | 1400 | 0.7535 |
| 1.3616 | 1.8629 | 1440 | 0.7615 |
| 1.3616 | 1.9146 | 1480 | 0.7604 |
| 1.3261 | 1.9664 | 1520 | 0.7544 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
cimol/764c6312-a647-4eb4-b094-2706e70afdbc
|
cimol
| 2025-02-04T04:32:21Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"region:us"
] | null | 2025-02-04T04:03:19Z |
---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 764c6312-a647-4eb4-b094-2706e70afdbc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: true
chat_template: llama3
data_processes: 24
dataset_prepared_path: null
datasets:
- data_files:
- c2fee9c78f1574ee_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c2fee9c78f1574ee_train_data.json
type:
field_input: Description
field_instruction: Patient
field_output: Doctor
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 4
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: cimol/764c6312-a647-4eb4-b094-2706e70afdbc
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 7.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.04
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/c2fee9c78f1574ee_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 17333
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
total_train_batch_size: 32
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 059bd8ea-8d4e-42bf-ae21-2eb2b22407b3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 059bd8ea-8d4e-42bf-ae21-2eb2b22407b3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 764c6312-a647-4eb4-b094-2706e70afdbc
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 17333
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.781 | 0.0043 | 1 | 3.2556 |
| 2.41 | 0.2157 | 50 | 2.4527 |
| 2.3489 | 0.4315 | 100 | 2.3260 |
| 2.0638 | 0.6472 | 150 | 2.2409 |
| 1.7407 | 0.8630 | 200 | 2.2356 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Omerhan/checkpoint-78-ucsahin
|
Omerhan
| 2025-02-04T04:31:54Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4997",
"loss:MultipleNegativesRankingLoss",
"tr",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:finetune:intfloat/multilingual-e5-large-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-02-04T04:30:45Z |
---
language:
- tr
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4997
- loss:MultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-large-instruct
widget:
- source_sentence: BYU'nun öğrenci bedeni, Pres döneminde ne kadar arttı. Ernest L.
    Wilkinson zamanın en büyük özel okulu mu olacak?
  sentences:
  - Ernest L. Wilkinson döneminde BYU'nun öğrenci vücudu altı kat arttı. Dolayısıyla,
    o zamanlar dönemin en büyük özel okulu haline gelmiştir.
  - 'Cevap: Falkland Adaları''nın para birimi Falkland sterlini (FKP)''dir.'
  - Franklin S. Harris 1921 yılında üniversitenin başkanlığına atandı. Doktora derecesine
    sahip ilk BYU başkanı oldu. Harris okulda birkaç önemli değişiklik yaptı ve onu
    gerçek bir üniversite haline getirdi, oysa daha önce organizasyonunun Akademi
    günlerinden kalıntıları vardı. Görev süresinin başında, okul herhangi bir akreditasyon
    organizasyonu tarafından resmi olarak tanınmadı. Dönem sonunda, okul o sırada
    tüm büyük akreditasyon organizasyonları altında akredite edilmiştir. Nihayetinde
    Kaliforniya Üniversitesi'nden doktorasını alan Howard S. McDonald tarafından değiştirildi.
    Bu pozisyonu ilk aldığında, İkinci Dünya Savaşı yeni sona ermişti ve binlerce
    öğrenci BYU'ya su basıyordu. Kalışının sonunda, okul 5.440 öğrencinin kaydına
    neredeyse beş kat büyümüştü. Üniversitenin böyle büyük bir akını idare edebilecek
    tesisleri yoktu, bu yüzden Ogden, Utah'daki bir Hava Kuvvetleri Üssü'nün bir kısmını
    satın aldı ve bazı öğrencileri barındırmak için yeniden inşa etti. Bir sonraki
    başkan, Ernest L. Wilkinson, okulun hızlandırılmış bir inşaat programını benimsemesiyle
    yoğun bir büyüme dönemini de yönetti. Wilkinson, kampüsteki seksenden fazla yapıyı
    inşa etmekten sorumluydu. Birçoğu hala ayakta. Görev süresi boyunca öğrenci vücudu
    altı kat arttı ve BYU'yu o zamanlar en büyük özel okul haline getirdi. Öğrencilerin
    kalitesi de arttı ve okulda yüksek eğitim standartlarına yol açtı. Son olarak,
    Wilkinson kampüsteki LDS Kilisesi birimlerini yeniden düzenledi ve yönetimi sırasında
    on kazık ve 100'den fazla koğuş eklendi.
- source_sentence: Politikacılar hakkında aşağıdaki paragraf göz önüne alındığında,
    hayatta kalan ve İrlanda Avam Kamarası üyesi olan son kişi kimdi?
  sentences:
  - Metne göre, The Times gazetesinin kurucusunun torunu olan ve 1847'de babasının
    yerini alan kişinin adı John Walter'dır.
  - Hayatta kalan ve İrlanda Avam Kamarası üyesi olan son kişi Sir Thomas Staples,
    9. Baronet'di.
  - Sir Thomas Staples, 9. Baronet (31 Temmuz 1775 - 14 Mayıs 1865) İngiliz-İrlandalı
    bir politikacı ve avukattı. İrlanda Avam Kamarası üyesi olan hayatta kalan son
    kişiydi, ancak kısa bir süre Meclis'te bulunmuştu.
- source_sentence: Hangi Ada 1308 yılında alınmıştır.
  sentences:
  - Raleigh'deki devlet okullarını Wake County Devlet Okulu Sistemi işletmektedir.
  - 1308 yılında İmralı Adası alınmıştır.
  - Osman Bey 1258 yılında Söğüt’te doğdu. Osman Bey 1 Ağustos 1326’da Bursa’da hayatını
    kaybetmiştir.1281 yılında Osman Bey 23 yaşında iken Ahi teşkilatından olan Şeyh
    Edebali’nin kızı Malhun Hatun ile evlendi.Bu evlilikten daha sonra Osmanlı Devleti’nin
    başına geçecek olan Orhan Gazi doğdu.1281 yılında Osman Beyin babası Ertuğrul
    Bey 90 yaşında vefat etmiştir.1326’da Osman Bey, Bursa’yı kuşattı. Fakat Osman
    beyin rahatsızlanması üzerine kuşatmaya Orhan Bey devam etti. Bursa alındıktan
    sonra başkent yapılmıştır.Osman Gazi son yıllarında yaşının ilerlemesi ve gut
    hastalığı yüzünden beylik idaresini oğlu olan Orhan Bey'e bırakmıştı.Osmanlı Beyliğinin
    ilk fethettiği ada İmralı Adasıdır. İmralı Adası 1308 yılında Osman Bey tarafından
    alınmıştır.İlk Osmanlı parası Osman Bey tarafından bakır olarak akçe adı ile 1324
    yılında bastırılmıştır.Osmanlı Beyliğinin ilk başkenti Söğüttür.Osmanlı tarihinde
    ilk savaş, 1284 yılında Bizans tekfurlarıyla yapılan Ermeni Beli savaşıdır.Osman
    Beyin ele geçirdiği ilk kale 1285 yılında fethedilen Kolca Hisar Kalesi’dir.Osmanlı
    beyliğinin ilk kadısı Osman Bey döneminde atanan Dursun Fakih’tir.Osman Bey 1288
    yılında Karacahisarı fethetti. Osman Bey 1299 yılında Bilecik'i fethetti.Osman
    Gazi, babası Ertuğrul Gazi'den yaklaşık 4.800 kilometrekare olarak devraldığı
    Osmanlı toprağını oğlu Orhan Gazi'ye 16.000 kilometrekare olarak devretmiştir.Osman
    Bey'in vefatı sonrası yerine Orhan Bey geçti.
- source_sentence: Tunakabuni'nin çalışmaları ne konudadır?
  sentences:
  - Tunakabuni çeşitli tıbbi ve dini konularda yazarlık yaptı. O Arap ve Hint kaynaklarına
    göre , 1679 yılında basit ilaçlar ve tıbbi aletlerle ilgili çalışmalar yapmıştır.
    O dönem, 1666-1694 yıllarında İran hükümdarı Süleyman Şah tarafından ona ithaf
    edilmiştir.
  - Tunakabuni'nin çalışmaları tıbbi ve dini konulardadır.
  - Metinde verilen bilgiye göre, 2012-13 yılında kamu harcamaları 28 milyon £ olarak
    belirlenmiştir.
- source_sentence: Tibet mimarisi hangi iki kültürü yansıtır?
  sentences:
  - 'Metinde belirtilenlere göre diğer partilerin aldığı oy oranları aşağıdaki gibidir:
    - Quebec egemenlik yanlısı Parti Quebecois (PQ): toplam oyların %40.16''sını aldı.
    - Quebec Yeni Demokrat Partisi (NPDQ): toplam oyların %1.22''sini aldı.'
  - Tibet mimarisi, Çin ve Hint kültürlerini yansıtmaktadır.
  - Tibet ekonomisi geçim tarım hakimdir, ancak turizm son yıllarda büyüyen bir sanayi
    haline gelmiştir. Tibet'te baskın din Tibet Budizm'dir; Buna ek olarak Tibet Budizm'e
benzer BΓΆn vardΔ±r ve Tibet MΓΌslΓΌmanlarΔ± ve HΔ±ristiyan azΔ±nlΔ±klar da vardΔ±r. Tibet
Budizmi, bΓΆlgenin sanat, mΓΌzik ve festivalleri ΓΌzerinde birincil bir etkidir.
Tibet mimarisi Γin ve Hint etkilerini yansΔ±tΔ±r. Tibet'teki zΔ±mba gΔ±dalarΔ± kavrulmuΕ
arpa, yak eti ve tereyaΔΔ± Γ§ayΔ±dΔ±r.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# intfloat-fine-tuned
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision c9e87c786ffac96aeaeb42863276930883923ecb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omerhan/checkpoint-78-ucsahin")
# Run inference
sentences = [
'Tibet mimarisi hangi iki kΓΌltΓΌrΓΌ yansΔ±tΔ±r?',
'Tibet mimarisi, Γin ve Hint kΓΌltΓΌrlerini yansΔ±tmaktadΔ±r.',
"Tibet ekonomisi geΓ§im tarΔ±m hakimdir, ancak turizm son yΔ±llarda bΓΌyΓΌyen bir sanayi haline gelmiΕtir. Tibet'te baskΔ±n din Tibet Budizm'dir; Buna ek olarak Tibet Budizm'e benzer BΓΆn vardΔ±r ve Tibet MΓΌslΓΌmanlarΔ± ve HΔ±ristiyan azΔ±nlΔ±klar da vardΔ±r. Tibet Budizmi, bΓΆlgenin sanat, mΓΌzik ve festivalleri ΓΌzerinde birincil bir etkidir. Tibet mimarisi Γin ve Hint etkilerini yansΔ±tΔ±r. Tibet'teki zΔ±mba gΔ±dalarΔ± kavrulmuΕ arpa, yak eti ve tereyaΔΔ± Γ§ayΔ±dΔ±r.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 4,997 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 16.36 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 33.39 tokens</li><li>max: 265 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 197.11 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Εehzade Selim kiminle akrabaydΔ±?</code> | <code>Εehzade Selim, Dulkadir Beyi AlaΓΌddevle Bozkurt Bey ile anne tarafΔ±ndan akrabaydΔ±.</code> | <code>Safevi Εah'Δ± Δ°smail 1507 yΔ±lΔ±nda hem Δ°stanbul'un hem de Kahire'nin gΓΆstereceΔi tepkiyi gΓΆrmek amacΔ±yla DulkadiroΔullarΔ± BeyliΔi'nin ΓΌzerine yΓΌrΓΌdΓΌ. AsΔ±l sebebi bu olmamakla beraber gΓΆrΓΌnΓΌΕteki sebep, Dulkadir Beyi AlaΓΌddevle Bozkurt Bey'in Εii olan Εah'a kΔ±zΔ±nΔ± vermek istememesiydi. Εah Δ°smail OsmanlΔ± topraklarΔ±ndan geΓ§erek Kayseri ΓΌzerinden Dulkadir topraklarΔ±na girdi.SavaΕta yenilen AlaΓΌddevle Bozkurt Bey kaΓ§tΔ± ve Εah Δ°smail Bey'in bir oΔlu ile iki torununu ele geΓ§irerek ΓΆldΓΌrttΓΌ. Bunun ΓΌzerine MaraΕ'a ve Elbistan'a giren Εah Δ°smail Dulkadir HanedanΔ±'nΔ±n mezarlarΔ±nΔ± yaktΔ±rdΔ±. Sonradan da OsmanlΔ± Devleti'ne bir mektup yazΔ±p topraklarΔ±nΔ± Γ§iΔnediΔinden dolayΔ± da ΓΆzΓΌr diledi. YΔ±llardan beri DulkadiroΔullarΔ± BeyliΔi'nin kendilerine baΔlΔ± olduΔunu iddia eden Memluklular ve OsmanlΔ±lar bu hareketi cevapsΔ±z bΔ±raktΔ±lar.Bu da Εah Δ°smail'in Anadolu'daki prestijini artΔ±rdΔ±. Memluklular tamamΔ±yla sessiz kalsa da OsmanlΔ±larΔ±n sessiz kalmalarΔ± mΓΌmkΓΌn deΔildi.Zira Trabzon sancak beyi Εehzade Selim, anne tarafΔ±ndan Dulkadir Beyi AlaΓΌddevle Bozkurt Bey ile akrabaydΔ±.Εehzade Selim ve Εehzade Korkut AlaΓΌddevle Bozkurt Bey'in kΔ±zΔ± olan aynΔ± anneden dΓΌnyaya gelmiΕti. Bir dayΔ±sΔ±na ve iki dayΔ± oΔluna yapΔ±lan bu harekete karΕΔ± Εehzade Selim Azerbaycan'a kadar Safevi topraklarΔ±na girerek Safevi HanedanΔ±'na mensup bazΔ± kiΕileri esir alΔ±p Trabzon'a getirerek dayΔ±sΔ±na yapΔ±lanΔ±n intikamΔ±nΔ± aldΔ±. BabasΔ± Bayezid bile hiΓ§bir Εey yapmamΔ±Εken Εehzade Selim' in bu hareketi gΓΆzlerin ona Γ§evrilmesine neden oldu. Bu arada II.Bayezid Εah Δ°smail'in herhangi bir seferine karΕΔ± Orta Anadolu'ya asker yΔ±ΔdΔ±.Bu nedenle Εah Δ°smail Anadolu'nun iΓ§lerine girmekten Γ§ekinmiΕtir. SayΔ±sΔ± 115 bini bulan bu orduyu gΓΆzΓΌne kestiremeyen Εah, II. Bayezid'e ΕanlΔ± bΓΌyΓΌk babam diye hitap ettiΔi bir mektup yazarak 1508 yΔ±llarΔ±nΔ±n ilk aylarΔ±nda DiyarbakΔ±r'a Γ§ekildi.</code> |
| <code>Δ°ngilizler hangi yΔ±lda DerviΕeleri yendi?</code> | <code>Δ°ngilizler, DerviΕler'i 1920 yΔ±lΔ±nda yendi.</code> | <code>19. yΓΌzyΔ±lΔ±n sonlarΔ±nda, Berlin konferansΔ± sona erdikten sonra AvrupalΔ± imparatorluklar ordularΔ±yla Afrika Boynuzu'na yelken aΓ§tΔ±lar. Somali ΓΌzerinde titreyen imparatorluk bulutlarΔ±, Afrika Boynuzu'ndan Somali askerlerini bir araya getiren ve Εimdiye kadarki en uzun sΓΆmΓΌrge karΕΔ±tΔ± savaΕlardan birini baΕlatan DerviΕ lideri Muhammed Abdullah Hassan'Δ± alarma geΓ§irdi. DerviΕ Devleti Δ°ngiliz imparatorluΔunu dΓΆrt kez baΕarΔ±yla pΓΌskΓΌrttΓΌ ve kΔ±yΔ± bΓΆlgesine geri Γ§ekilmeye zorladΔ±. DerviΕ Devleti Δ°ngilizlere karΕΔ± baΕarΔ±larΔ±nΔ±n bir sonucu olarak OsmanlΔ± ve Alman imparatorluklarΔ±ndan destek aldΔ±. TΓΌrkler Somali ulusundan Hasan Emir'i de seΓ§tiler ve Almanlar DerviΕlerin elde edeceΔi her bΓΆlgeyi resmen tanΔ±maya sΓΆz verdiler. Γeyrek asΔ±rlΔ±k Δ°ngilizleri kΓΆrfezde tuttuktan sonra, DerviΕler sonunda 1920'de yenildi, Δ°ngiltere'nin Afrika'da ilk kez DerviΕ baΕkenti Taleex'i bombalamak iΓ§in uΓ§aklarΔ± kullandΔ±. Bu bombardΔ±man sonucunda eski DerviΕ topraklarΔ± Britanya'nΔ±n himayesine dΓΆnΓΌΕtΓΌ. Δ°talya benzer Εekilde Somali SultanlarΔ± ve ordulardan aynΔ± muhalefetle karΕΔ± karΕΔ±ya kaldΔ± ve 1927'nin sonlarΔ±nda FaΕist dΓΆneme kadar modern Somali'nin parΓ§alarΔ±nΔ±n tam kontrolΓΌnΓΌ elde edemedi. Bu iΕgal 1941 yΔ±lΔ±na kadar sΓΌrdΓΌ ve yerini Δ°ngiliz askeri idaresi aldΔ±.</code> |
| <code>βpost-punkβ terimini ilk kullanan kimdi?</code> | <code>Metinde belirtilen bilgilere gΓΆre, "post-punk" terimini ilk kullananlarΔ±n gazeteciler olduΔu belirtilmiΕtir. Ancak metinde terimin ilk kullanΔ±mΔ±nΔ± yapan gazetecinin kim olduΔu belirtilmemiΕtir.</code> | <code>βpost-punkβ terimi ilk olarak 1970'lerin sonlarΔ±nda gazeteciler tarafΔ±ndan punk'Δ±n sonik Εablonunun ΓΆtesine geΓ§en gruplarΔ± farklΔ± bΓΆlgelere tanΔ±mlamak iΓ§in kullanΔ±ldΔ±. BaΕlangΔ±Γ§ta punk'Δ±n DIY etiΔi ve enerjisinden esinlenen bu sanatΓ§Δ±larΔ±n Γ§oΔu, sonuΓ§ta stil ve hareketle hayal kΔ±rΔ±klΔ±ΔΔ±na uΔradΔ± ve ticari formΓΌle, rock kongresi ve ΓΆz parodisine dΓΌΕtΓΌΔΓΌnΓΌ hissetti. PopΓΌlist iddialarΔ±nΔ± eriΕilebilirlik ve ham basitliΔe karΕΔ± reddettiler, bunun yerine mΓΌzikal geleneΔi kΔ±rma, sΔ±radan yerleri alt etme ve izleyicilere meydan okuma fΔ±rsatΔ± gΓΆrdΓΌler. SanatΓ§Δ±lar bΓΌyΓΌk ΓΆlΓ§ΓΌde beyaz kaygΔ±larΔ± ΓΌzerinde punk odak ΓΆtesine taΕΔ±ndΔ±, erkek, iΕΓ§i sΔ±nΔ±fΔ± nΓΌfus ve kurulan rock and roll tropes onun sΓΌrekli gΓΌven terk, BΓΆyle ΓΌΓ§ akor ilerlemeler ve Chuck Berry tabanlΔ± gitar riffs gibi. Bu sanatΓ§Δ±lar bunun yerine βradikal iΓ§eriΔin radikal bir form gerektirdiΔineβ inanarak punk'Δ± βsΓΌrekli deΔiΕimin bir zorunluluΔuβ olarak tanΔ±mladΔ±lar.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
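The `scale` and `similarity_fct` values above are simply the constructor arguments of the loss in Sentence Transformers. A minimal sketch of how this loss is typically instantiated (cosine similarity and `scale=20.0` are the library defaults; variable names are illustrative):

```python
from sentence_transformers import SentenceTransformer, losses

# Sketch: build the loss reported above. MultipleNegativesRankingLoss treats the
# other positives in a batch as in-batch negatives, which is why the card pairs it
# with a no-duplicates batch sampler.
model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
```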
### Training Hyperparameters
#### Non-Default Hyperparameters
- `gradient_accumulation_steps`: 8
- `learning_rate`: 1e-06
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.01
- `tf32`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
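The non-default values above translate more or less one-to-one into the Sentence Transformers 3.x training arguments. A sketch, assuming that API (the output directory is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Sketch only: mirrors the non-default hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="checkpoint-78-ucsahin",   # placeholder
    num_train_epochs=1,
    gradient_accumulation_steps=8,
    learning_rate=1e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    tf32=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```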
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.01
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
saifrahmed/grizzabella-net
|
saifrahmed
| 2025-02-04T04:31:45Z | 7 | 1 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-04T04:31:43Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: GRIZZABELLA_FUR_BABY
---
# Grizzabella Net
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `GRIZZABELLA_FUR_BABY` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('saifrahmed/grizzabella-net', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
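Continuing from the snippet above, the trigger word should appear in the prompt so the LoRA actually activates (the prompt and filename below are illustrative):

```py
# Illustrative prompt containing the trigger word; reuses `pipeline` from the snippet above.
image = pipeline('GRIZZABELLA_FUR_BABY sitting on a windowsill, soft morning light').images[0]
image.save('grizzabella.png')
```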
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
zzoming/Gemma-Ko-7B-SFT-MODEL
|
zzoming
| 2025-02-04T04:31:17Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-04T04:22:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mlfoundations-dev/llama3-1_8b_r1_annotated_olympiads
|
mlfoundations-dev
| 2025-02-04T04:31:04Z | 3,728 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T21:16:01Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3-1_8b_r1_annotated_olympiads
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-1_8b_r1_annotated_olympiads
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/r1_annotated_olympiads dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|
minhnguyennnnnn/5b95903b-6ab0-44c3-acf9-6de2fef25c7b
|
minhnguyennnnnn
| 2025-02-04T04:30:26Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T02:30:03Z |
---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5b95903b-6ab0-44c3-acf9-6de2fef25c7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8f23d0c27dcb0f9f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8f23d0c27dcb0f9f_train_data.json
type:
field_input: evidence
field_instruction: user_input
field_output: claim
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: minhnguyennnnnn/5b95903b-6ab0-44c3-acf9-6de2fef25c7b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8f23d0c27dcb0f9f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: afeef3dd-1e46-4c12-b26d-35001f70da6e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: afeef3dd-1e46-4c12-b26d-35001f70da6e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
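For reference, a config like the one above is normally run through the Axolotl CLI; a sketch, assuming the YAML is saved locally (the filename is a placeholder):

```bash
# Sketch: launch LoRA fine-tuning from the config above (filename is a placeholder).
accelerate launch -m axolotl.cli.train 5b95903b.yml
```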
</details><br>
# 5b95903b-6ab0-44c3-acf9-6de2fef25c7b
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8887 | 0.0035 | 200 | 0.9616 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Omerhan/checkpoint-60-ucsahin
|
Omerhan
| 2025-02-04T04:29:56Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4997",
"loss:MultipleNegativesRankingLoss",
"tr",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:finetune:intfloat/multilingual-e5-large-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-02-04T04:28:46Z |
---
language:
- tr
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4997
- loss:MultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-large-instruct
widget:
- source_sentence: BYU'nun ΓΆΔrenci bedeni, Pres dΓΆneminde ne kadar arttΔ±. Ernest L.
Wilkinson zamanΔ±n en bΓΌyΓΌk ΓΆzel okulu mu olacak?
sentences:
- Ernest L. Wilkinson dΓΆneminde BYU'nun ΓΆΔrenci vΓΌcudu altΔ± kat arttΔ±. DolayΔ±sΔ±yla,
o zamanlar dΓΆnemin en bΓΌyΓΌk ΓΆzel okulu haline gelmiΕtir.
- 'Cevap: Falkland AdalarΔ±''nΔ±n para birimi Falkland sterlini (FKP)''dir.'
- Franklin S. Harris 1921 yΔ±lΔ±nda ΓΌniversitenin baΕkanlΔ±ΔΔ±na atandΔ±. Doktora derecesine
sahip ilk BYU baΕkanΔ± oldu. Harris okulda birkaΓ§ ΓΆnemli deΔiΕiklik yaptΔ± ve onu
gerΓ§ek bir ΓΌniversite haline getirdi, oysa daha ΓΆnce organizasyonunun Akademi
gΓΌnlerinden kalΔ±ntΔ±larΔ± vardΔ±. GΓΆrev sΓΌresinin baΕΔ±nda, okul herhangi bir akreditasyon
organizasyonu tarafΔ±ndan resmi olarak tanΔ±nmadΔ±. DΓΆnem sonunda, okul o sΔ±rada
tΓΌm bΓΌyΓΌk akreditasyon organizasyonlarΔ± altΔ±nda akredite edilmiΕtir. Nihayetinde
Kaliforniya Γniversitesi'nden doktorasΔ±nΔ± alan Howard S. McDonald tarafΔ±ndan deΔiΕtirildi.
Bu pozisyonu ilk aldΔ±ΔΔ±nda, Δ°kinci DΓΌnya SavaΕΔ± yeni sona ermiΕti ve binlerce
ΓΆΔrenci BYU'ya su basΔ±yordu. KalΔ±ΕΔ±nΔ±n sonunda, okul 5.440 ΓΆΔrencinin kaydΔ±na
neredeyse beΕ kat bΓΌyΓΌmΓΌΕtΓΌ. Γniversitenin bΓΆyle bΓΌyΓΌk bir akΔ±nΔ± idare edebilecek
tesisleri yoktu, bu yΓΌzden Ogden, Utah'daki bir Hava Kuvvetleri ΓssΓΌ'nΓΌn bir kΔ±smΔ±nΔ±
satΔ±n aldΔ± ve bazΔ± ΓΆΔrencileri barΔ±ndΔ±rmak iΓ§in yeniden inΕa etti. Bir sonraki
baΕkan, Ernest L. Wilkinson, okulun hΔ±zlandΔ±rΔ±lmΔ±Ε bir inΕaat programΔ±nΔ± benimsemesiyle
yoΔun bir bΓΌyΓΌme dΓΆnemini de yΓΆnetti. Wilkinson, kampΓΌsteki seksenden fazla yapΔ±yΔ±
inΕa etmekten sorumluydu. BirΓ§oΔu hala ayakta. GΓΆrev sΓΌresi boyunca ΓΆΔrenci vΓΌcudu
altΔ± kat arttΔ± ve BYU'yu o zamanlar en bΓΌyΓΌk ΓΆzel okul haline getirdi. ΓΔrencilerin
kalitesi de arttΔ± ve okulda yΓΌksek eΔitim standartlarΔ±na yol aΓ§tΔ±. Son olarak,
Wilkinson kampΓΌsteki LDS Kilisesi birimlerini yeniden dΓΌzenledi ve yΓΆnetimi sΔ±rasΔ±nda
on kazΔ±k ve 100'den fazla koΔuΕ eklendi.
- source_sentence: PolitikacΔ±lar hakkΔ±nda aΕaΔΔ±daki paragraf gΓΆz ΓΆnΓΌne alΔ±ndΔ±ΔΔ±nda,
hayatta kalan ve Δ°rlanda Avam KamarasΔ± ΓΌyesi olan son kiΕi kimdi?
sentences:
- Metne gΓΆre, The Times gazetesinin kurucusunun torunu olan ve 1847'de babasΔ±nΔ±n
yerini alan kiΕinin adΔ± John Walter'dΔ±r.
- Hayatta kalan ve Δ°rlanda Avam KamarasΔ± ΓΌyesi olan son kiΕi Sir Thomas Staples,
9. Baronet'di.
- Sir Thomas Staples, 9. Baronet (31 Temmuz 1775 - 14 MayΔ±s 1865) Δ°ngiliz-Δ°rlandalΔ±
bir politikacΔ± ve avukattΔ±. Δ°rlanda Avam KamarasΔ± ΓΌyesi olan hayatta kalan son
kiΕiydi, ancak kΔ±sa bir sΓΌre Meclis'te bulunmuΕtu.
- source_sentence: Hangi Ada 1308 yΔ±lΔ±nda alΔ±nmΔ±ΕtΔ±r.
sentences:
- Raleigh'deki devlet okullarΔ±nΔ± Wake County Devlet Okulu Sistemi iΕletmektedir.
- 1308 yΔ±lΔ±nda Δ°mralΔ± AdasΔ± alΔ±nmΔ±ΕtΔ±r.
- Osman Bey 1258 yΔ±lΔ±nda SΓΆΔΓΌtβte doΔdu. Osman Bey 1 AΔustos 1326βda Bursaβda hayatΔ±nΔ±
kaybetmiΕtir.1281 yΔ±lΔ±nda Osman Bey 23 yaΕΔ±nda iken Ahi teΕkilatΔ±ndan olan Εeyh
Edebaliβnin kΔ±zΔ± Malhun Hatun ile evlendi.Bu evlilikten daha sonra OsmanlΔ± Devletiβnin
baΕΔ±na geΓ§ecek olan Orhan Gazi doΔdu.1281 yΔ±lΔ±nda Osman Beyin babasΔ± ErtuΔrul
Bey 90 yaΕΔ±nda vefat etmiΕtir.1326βda Osman Bey, BursaβyΔ± kuΕattΔ±. Fakat Osman
beyin rahatsΔ±zlanmasΔ± ΓΌzerine kuΕatmaya Orhan Bey devam etti. Bursa alΔ±ndΔ±ktan
sonra baΕkent yapΔ±lmΔ±ΕtΔ±r.Osman Gazi son yΔ±llarΔ±nda yaΕΔ±nΔ±n ilerlemesi ve gut
hastalΔ±ΔΔ± yΓΌzΓΌnden beylik idaresini oΔlu olan Orhan Bey'e bΔ±rakmΔ±ΕtΔ±.OsmanlΔ± BeyliΔinin
ilk fethettiΔi ada Δ°mralΔ± AdasΔ±dΔ±r. Δ°mralΔ± AdasΔ± 1308 yΔ±lΔ±nda Osman Bey tarafΔ±ndan
alΔ±nmΔ±ΕtΔ±r.Δ°lk OsmanlΔ± parasΔ± Osman Bey tarafΔ±ndan bakΔ±r olarak akΓ§e adΔ± ile 1324
yΔ±lΔ±nda bastΔ±rΔ±lmΔ±ΕtΔ±r.OsmanlΔ± BeyliΔinin ilk baΕkenti SΓΆΔΓΌttΓΌr.OsmanlΔ± tarihinde
ilk savaΕ, 1284 yΔ±lΔ±nda Bizans tekfurlarΔ±yla yapΔ±lan Ermeni Beli savaΕΔ±dΔ±r.Osman
Beyin ele geΓ§irdiΔi ilk kale 1285 yΔ±lΔ±nda fethedilen Kolca Hisar Kalesiβdir.OsmanlΔ±
beyliΔinin ilk kadΔ±sΔ± Osman Bey dΓΆneminde atanan Dursun Fakihβtir.Osman Bey 1288
yΔ±lΔ±nda KaracahisarΔ± fethetti. Osman Bey 1299 yΔ±lΔ±nda Bilecik'i fethetti.Osman
Gazi, babasΔ± ErtuΔrul Gazi'den yaklaΕΔ±k 4.800 kilometrekare olarak devraldΔ±ΔΔ±
OsmanlΔ± topraΔΔ±nΔ± oΔlu Orhan Gazi'ye 16.000 kilometrekare olarak devretmiΕtir.Osman
Bey'in vefatΔ± sonrasΔ± yerine Orhan Bey geΓ§ti.
- source_sentence: Tunakabuni'nin Γ§alΔ±ΕmalarΔ± ne konudadΔ±r?
sentences:
- Tunakabuni Γ§eΕitli tΔ±bbi ve dini konularda yazarlΔ±k yaptΔ±. O Arap ve Hint kaynaklarΔ±na
gΓΆre , 1679 yΔ±lΔ±nda basit ilaΓ§lar ve tΔ±bbi aletlerle ilgili Γ§alΔ±Εmalar yapmΔ±ΕtΔ±r.
O dΓΆnem, 1666-1694 yΔ±llarΔ±nda Δ°ran hΓΌkΓΌmdarΔ± SΓΌleyman Εah tarafΔ±ndan ona ithaf
edilmiΕtir.
- Tunakabuni'nin Γ§alΔ±ΕmalarΔ± tΔ±bbi ve dini konulardadΔ±r.
- Metinde verilen bilgiye gΓΆre, 2012-13 yΔ±lΔ±nda kamu harcamalarΔ± 28 milyon Β£ olarak
belirlenmiΕtir.
- source_sentence: Tibet mimarisi hangi iki kΓΌltΓΌrΓΌ yansΔ±tΔ±r?
sentences:
- 'Metinde belirtilenlere gΓΆre diΔer partilerin aldΔ±ΔΔ± oy oranlarΔ± aΕaΔΔ±daki gibidir:
- Quebec egemenlik yanlΔ±sΔ± Parti Quebecois (PQ): toplam oylarΔ±n %40.16''sΔ±nΔ± aldΔ±.
- Quebec Yeni Demokrat Partisi (NPDQ): toplam oylarΔ±n %1.22''sini aldΔ±.'
- Tibet mimarisi, Γin ve Hint kΓΌltΓΌrlerini yansΔ±tmaktadΔ±r.
- Tibet ekonomisi geΓ§im tarΔ±m hakimdir, ancak turizm son yΔ±llarda bΓΌyΓΌyen bir sanayi
haline gelmiΕtir. Tibet'te baskΔ±n din Tibet Budizm'dir; Buna ek olarak Tibet Budizm'e
benzer BΓΆn vardΔ±r ve Tibet MΓΌslΓΌmanlarΔ± ve HΔ±ristiyan azΔ±nlΔ±klar da vardΔ±r. Tibet
Budizmi, bΓΆlgenin sanat, mΓΌzik ve festivalleri ΓΌzerinde birincil bir etkidir.
Tibet mimarisi Γin ve Hint etkilerini yansΔ±tΔ±r. Tibet'teki zΔ±mba gΔ±dalarΔ± kavrulmuΕ
arpa, yak eti ve tereyaΔΔ± Γ§ayΔ±dΔ±r.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# intfloat-fine-tuned
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision c9e87c786ffac96aeaeb42863276930883923ecb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omerhan/checkpoint-60-ucsahin")
# Run inference
sentences = [
'Tibet mimarisi hangi iki kΓΌltΓΌrΓΌ yansΔ±tΔ±r?',
'Tibet mimarisi, Γin ve Hint kΓΌltΓΌrlerini yansΔ±tmaktadΔ±r.',
"Tibet ekonomisi geΓ§im tarΔ±m hakimdir, ancak turizm son yΔ±llarda bΓΌyΓΌyen bir sanayi haline gelmiΕtir. Tibet'te baskΔ±n din Tibet Budizm'dir; Buna ek olarak Tibet Budizm'e benzer BΓΆn vardΔ±r ve Tibet MΓΌslΓΌmanlarΔ± ve HΔ±ristiyan azΔ±nlΔ±klar da vardΔ±r. Tibet Budizmi, bΓΆlgenin sanat, mΓΌzik ve festivalleri ΓΌzerinde birincil bir etkidir. Tibet mimarisi Γin ve Hint etkilerini yansΔ±tΔ±r. Tibet'teki zΔ±mba gΔ±dalarΔ± kavrulmuΕ arpa, yak eti ve tereyaΔΔ± Γ§ayΔ±dΔ±r.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
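The same two calls can also rank candidate passages for a query rather than just score a fixed list; a small sketch (texts adapted from the widget examples above):

```python
from sentence_transformers import SentenceTransformer

# Sketch: score candidate passages against a single query and pick the best one.
model = SentenceTransformer("Omerhan/checkpoint-60-ucsahin")
query = "Tibet mimarisi hangi iki kültürü yansıtır?"
passages = [
    "Tibet mimarisi, Çin ve Hint kültürlerini yansıtmaktadır.",
    "Raleigh'deki devlet okullarını Wake County Devlet Okulu Sistemi işletmektedir.",
]
scores = model.similarity(model.encode([query]), model.encode(passages))  # shape [1, 2]
best = int(scores[0].argmax())
print(passages[best])
```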
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 4,997 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 16.36 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 33.39 tokens</li><li>max: 265 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 197.11 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Εehzade Selim kiminle akrabaydΔ±?</code> | <code>Εehzade Selim, Dulkadir Beyi AlaΓΌddevle Bozkurt Bey ile anne tarafΔ±ndan akrabaydΔ±.</code> | <code>Safevi Εah'Δ± Δ°smail 1507 yΔ±lΔ±nda hem Δ°stanbul'un hem de Kahire'nin gΓΆstereceΔi tepkiyi gΓΆrmek amacΔ±yla DulkadiroΔullarΔ± BeyliΔi'nin ΓΌzerine yΓΌrΓΌdΓΌ. AsΔ±l sebebi bu olmamakla beraber gΓΆrΓΌnΓΌΕteki sebep, Dulkadir Beyi AlaΓΌddevle Bozkurt Bey'in Εii olan Εah'a kΔ±zΔ±nΔ± vermek istememesiydi. Εah Δ°smail OsmanlΔ± topraklarΔ±ndan geΓ§erek Kayseri ΓΌzerinden Dulkadir topraklarΔ±na girdi.SavaΕta yenilen AlaΓΌddevle Bozkurt Bey kaΓ§tΔ± ve Εah Δ°smail Bey'in bir oΔlu ile iki torununu ele geΓ§irerek ΓΆldΓΌrttΓΌ. Bunun ΓΌzerine MaraΕ'a ve Elbistan'a giren Εah Δ°smail Dulkadir HanedanΔ±'nΔ±n mezarlarΔ±nΔ± yaktΔ±rdΔ±. Sonradan da OsmanlΔ± Devleti'ne bir mektup yazΔ±p topraklarΔ±nΔ± Γ§iΔnediΔinden dolayΔ± da ΓΆzΓΌr diledi. YΔ±llardan beri DulkadiroΔullarΔ± BeyliΔi'nin kendilerine baΔlΔ± olduΔunu iddia eden Memluklular ve OsmanlΔ±lar bu hareketi cevapsΔ±z bΔ±raktΔ±lar.Bu da Εah Δ°smail'in Anadolu'daki prestijini artΔ±rdΔ±. Memluklular tamamΔ±yla sessiz kalsa da OsmanlΔ±larΔ±n sessiz kalmalarΔ± mΓΌmkΓΌn deΔildi.Zira Trabzon sancak beyi Εehzade Selim, anne tarafΔ±ndan Dulkadir Beyi AlaΓΌddevle Bozkurt Bey ile akrabaydΔ±.Εehzade Selim ve Εehzade Korkut AlaΓΌddevle Bozkurt Bey'in kΔ±zΔ± olan aynΔ± anneden dΓΌnyaya gelmiΕti. Bir dayΔ±sΔ±na ve iki dayΔ± oΔluna yapΔ±lan bu harekete karΕΔ± Εehzade Selim Azerbaycan'a kadar Safevi topraklarΔ±na girerek Safevi HanedanΔ±'na mensup bazΔ± kiΕileri esir alΔ±p Trabzon'a getirerek dayΔ±sΔ±na yapΔ±lanΔ±n intikamΔ±nΔ± aldΔ±. BabasΔ± Bayezid bile hiΓ§bir Εey yapmamΔ±Εken Εehzade Selim' in bu hareketi gΓΆzlerin ona Γ§evrilmesine neden oldu. Bu arada II.Bayezid Εah Δ°smail'in herhangi bir seferine karΕΔ± Orta Anadolu'ya asker yΔ±ΔdΔ±.Bu nedenle Εah Δ°smail Anadolu'nun iΓ§lerine girmekten Γ§ekinmiΕtir. SayΔ±sΔ± 115 bini bulan bu orduyu gΓΆzΓΌne kestiremeyen Εah, II. Bayezid'e ΕanlΔ± bΓΌyΓΌk babam diye hitap ettiΔi bir mektup yazarak 1508 yΔ±llarΔ±nΔ±n ilk aylarΔ±nda DiyarbakΔ±r'a Γ§ekildi.</code> |
| <code>Δ°ngilizler hangi yΔ±lda DerviΕeleri yendi?</code> | <code>Δ°ngilizler, DerviΕler'i 1920 yΔ±lΔ±nda yendi.</code> | <code>19. yΓΌzyΔ±lΔ±n sonlarΔ±nda, Berlin konferansΔ± sona erdikten sonra AvrupalΔ± imparatorluklar ordularΔ±yla Afrika Boynuzu'na yelken aΓ§tΔ±lar. Somali ΓΌzerinde titreyen imparatorluk bulutlarΔ±, Afrika Boynuzu'ndan Somali askerlerini bir araya getiren ve Εimdiye kadarki en uzun sΓΆmΓΌrge karΕΔ±tΔ± savaΕlardan birini baΕlatan DerviΕ lideri Muhammed Abdullah Hassan'Δ± alarma geΓ§irdi. DerviΕ Devleti Δ°ngiliz imparatorluΔunu dΓΆrt kez baΕarΔ±yla pΓΌskΓΌrttΓΌ ve kΔ±yΔ± bΓΆlgesine geri Γ§ekilmeye zorladΔ±. DerviΕ Devleti Δ°ngilizlere karΕΔ± baΕarΔ±larΔ±nΔ±n bir sonucu olarak OsmanlΔ± ve Alman imparatorluklarΔ±ndan destek aldΔ±. TΓΌrkler Somali ulusundan Hasan Emir'i de seΓ§tiler ve Almanlar DerviΕlerin elde edeceΔi her bΓΆlgeyi resmen tanΔ±maya sΓΆz verdiler. Γeyrek asΔ±rlΔ±k Δ°ngilizleri kΓΆrfezde tuttuktan sonra, DerviΕler sonunda 1920'de yenildi, Δ°ngiltere'nin Afrika'da ilk kez DerviΕ baΕkenti Taleex'i bombalamak iΓ§in uΓ§aklarΔ± kullandΔ±. Bu bombardΔ±man sonucunda eski DerviΕ topraklarΔ± Britanya'nΔ±n himayesine dΓΆnΓΌΕtΓΌ. Δ°talya benzer Εekilde Somali SultanlarΔ± ve ordulardan aynΔ± muhalefetle karΕΔ± karΕΔ±ya kaldΔ± ve 1927'nin sonlarΔ±nda FaΕist dΓΆneme kadar modern Somali'nin parΓ§alarΔ±nΔ±n tam kontrolΓΌnΓΌ elde edemedi. Bu iΕgal 1941 yΔ±lΔ±na kadar sΓΌrdΓΌ ve yerini Δ°ngiliz askeri idaresi aldΔ±.</code> |
| <code>βpost-punkβ terimini ilk kullanan kimdi?</code> | <code>Metinde belirtilen bilgilere gΓΆre, "post-punk" terimini ilk kullananlarΔ±n gazeteciler olduΔu belirtilmiΕtir. Ancak metinde terimin ilk kullanΔ±mΔ±nΔ± yapan gazetecinin kim olduΔu belirtilmemiΕtir.</code> | <code>βpost-punkβ terimi ilk olarak 1970'lerin sonlarΔ±nda gazeteciler tarafΔ±ndan punk'Δ±n sonik Εablonunun ΓΆtesine geΓ§en gruplarΔ± farklΔ± bΓΆlgelere tanΔ±mlamak iΓ§in kullanΔ±ldΔ±. BaΕlangΔ±Γ§ta punk'Δ±n DIY etiΔi ve enerjisinden esinlenen bu sanatΓ§Δ±larΔ±n Γ§oΔu, sonuΓ§ta stil ve hareketle hayal kΔ±rΔ±klΔ±ΔΔ±na uΔradΔ± ve ticari formΓΌle, rock kongresi ve ΓΆz parodisine dΓΌΕtΓΌΔΓΌnΓΌ hissetti. PopΓΌlist iddialarΔ±nΔ± eriΕilebilirlik ve ham basitliΔe karΕΔ± reddettiler, bunun yerine mΓΌzikal geleneΔi kΔ±rma, sΔ±radan yerleri alt etme ve izleyicilere meydan okuma fΔ±rsatΔ± gΓΆrdΓΌler. SanatΓ§Δ±lar bΓΌyΓΌk ΓΆlΓ§ΓΌde beyaz kaygΔ±larΔ± ΓΌzerinde punk odak ΓΆtesine taΕΔ±ndΔ±, erkek, iΕΓ§i sΔ±nΔ±fΔ± nΓΌfus ve kurulan rock and roll tropes onun sΓΌrekli gΓΌven terk, BΓΆyle ΓΌΓ§ akor ilerlemeler ve Chuck Berry tabanlΔ± gitar riffs gibi. Bu sanatΓ§Δ±lar bunun yerine βradikal iΓ§eriΔin radikal bir form gerektirdiΔineβ inanarak punk'Δ± βsΓΌrekli deΔiΕimin bir zorunluluΔuβ olarak tanΔ±mladΔ±lar.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `gradient_accumulation_steps`: 8
- `learning_rate`: 1e-06
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.01
- `tf32`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.01
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
lesso/4bf06a74-5c35-477c-ba35-149b000de619
|
lesso
| 2025-02-04T04:28:32Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T03:57:38Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4bf06a74-5c35-477c-ba35-149b000de619
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 05d23f8c0d4d9d78_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/05d23f8c0d4d9d78_train_data.json
type:
field_input: ''
field_instruction: ctx
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/4bf06a74-5c35-477c-ba35-149b000de619
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001015
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 150
micro_batch_size: 2
mlflow_experiment_name: /tmp/G.O.D/05d23f8c0d4d9d78_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 88a1423e-4ec5-43b2-a3b0-ca7acaab4d17
wandb_project: ab-god15
wandb_run: your_name
wandb_runid: 88a1423e-4ec5-43b2-a3b0-ca7acaab4d17
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4bf06a74-5c35-477c-ba35-149b000de619
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
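Since this repository contains a LoRA adapter rather than merged weights, a minimal loading sketch (assuming the adapter is applied to the base model listed above via PEFT) could look like this:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/SmolLM-360M-Instruct"
adapter_id = "lesso/4bf06a74-5c35-477c-ba35-149b000de619"

# Load the base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
The axolotl config above applies a llama3 chat template, so real prompts should be formatted the same way.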
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001015
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0022 | 50 | nan |
| 0.0 | 0.0044 | 100 | nan |
| 0.0 | 0.0067 | 150 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ewhk9887/merged-deepseek-r1-with-python
|
ewhk9887
| 2025-02-04T04:28:20Z | 139 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-03T12:08:46Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
license: mit
datasets:
- iamtarun/python_code_instructions_18k_alpaca
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
---
# Model Card for Finetuned DeepSeek-R1 Code Review Model
## Model Details
### Model Description
This model is a finetuned version of the DeepSeek-R1 Distill Llama model, adapted for performing code reviews in Korean. It was fine-tuned with QLoRA on a transformed version of the [iamtarun/python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca) dataset, in which code generation prompts were converted into code review prompts. The LoRA adapters have been merged into the base model to produce a self-contained model that can be deployed directly.
- **Developed by:** [More Information Needed]
- **Model type:** Causal Language Model fine-tuned for Code Review
- **Language(s):** Korean, English
- **License:** MIT
- **Base Model:** [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
### Model Sources
- **Repository:** [More Information Needed]
- **Paper (optional):** [More Information Needed]
- **Demo (optional):** [More Information Needed]
---
## Uses
### Direct Use
This model is intended for generating code reviews for Python code. It is designed to provide feedback on code quality, style, and possible improvements, and serves as a prototype for programming education.
### Downstream Use (optional)
It can be integrated into developer tools, code analysis platforms, or educational environments to assist in code review tasks.
### Out-of-Scope Use
This model is not optimized for generating full code, handling languages other than Python, or for use in critical production environments without human oversight. A follow-up model trained on a Korean code-review dataset and covering Go and Rust is planned for a later upload.
---
## Bias, Risks, and Limitations
- The model has been trained on data that may have inherent biases, and its reviews are generated automatically.
- The model is not perfectly optimized for Korean-language code review.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "your_hf_username/merged-deepseek-r1-codereview"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = """Below is a Python code snippet.
Write a concise review of about 3-4 lines covering its strengths and weaknesses, possible improvements, and code style.
# Code:
### Python code
def sum_sequence(sequence):
    sum = 0
    for num in sequence:
        sum += num
    return sum
### Code review:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details
### Training Data
The model was fine-tuned using the iamtarun/python_code_instructions_18k_alpaca dataset. The original code generation prompts were transformed into code review prompts to suit the task.
### Training Procedure
- **Preprocessing:** The dataset was preprocessed to convert the code generation prompts into a standardized code review format.
- **Fine-tuning:** The base model was fine-tuned using QLoRA with 4-bit quantization for efficiency. The LoRA adapters were then merged into the base model to produce a self-contained model.
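A rough sketch of this merge step (assuming the adapters were trained with PEFT; the local adapter path is illustrative):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"
adapter_dir = "path/to/qlora-adapter"  # hypothetical local checkpoint from the QLoRA run

# Reload the base model in full precision, attach the trained LoRA adapters,
# and fold them into the base weights to obtain a self-contained model.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, adapter_dir).merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained(base_id)
merged.save_pretrained("merged-deepseek-r1-codereview")
tokenizer.save_pretrained("merged-deepseek-r1-codereview")
```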
|
onekq-ai/s1-32B-bnb-4bit
|
onekq-ai
| 2025-02-04T04:27:25Z | 41 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"base_model:simplescaling/s1-32B",
"base_model:quantized:simplescaling/s1-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-04T03:46:39Z |
---
library_name: transformers
license: apache-2.0
base_model:
- simplescaling/s1-32B
---
Bitsandbytes quantization of https://huggingface.co/simplescaling/s1-32B.
See https://huggingface.co/blog/4bit-transformers-bitsandbytes for instructions.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import BitsAndBytesConfig
import torch
# Define the 4-bit configuration
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
# Load the pre-trained model with the 4-bit quantization configuration
model = AutoModelForCausalLM.from_pretrained("simplescaling/s1-32B", quantization_config=nf4_config)
# Load the tokenizer associated with the model
tokenizer = AutoTokenizer.from_pretrained("simplescaling/s1-32B")
# Push the model and tokenizer to the Hugging Face hub
model.push_to_hub("onekq-ai/s1-32B-bnb-4bit", use_auth_token=True)
tokenizer.push_to_hub("onekq-ai/s1-32B-bnb-4bit", use_auth_token=True)
```
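Once pushed, the quantized checkpoint can be loaded directly from the Hub; a short sketch (assuming bitsandbytes and accelerate are installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The 4-bit bitsandbytes settings are stored in the checkpoint's config,
# so they are applied automatically at load time.
model = AutoModelForCausalLM.from_pretrained("onekq-ai/s1-32B-bnb-4bit", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("onekq-ai/s1-32B-bnb-4bit")
```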
|
guilxus/72158a60-a7eb-4991-8e08-5a4b1c5908b0
|
guilxus
| 2025-02-04T04:24:34Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T04:20:11Z |
---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 72158a60-a7eb-4991-8e08-5a4b1c5908b0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 28cfef58c079ae09_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/28cfef58c079ae09_train_data.json
type:
field_input: comment
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: guilxus/72158a60-a7eb-4991-8e08-5a4b1c5908b0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/28cfef58c079ae09_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: cae3f697-637d-476e-acf5-7861ed8393e4
wandb_project: Gradients-On-11
wandb_run: your_name
wandb_runid: cae3f697-637d-476e-acf5-7861ed8393e4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 72158a60-a7eb-4991-8e08-5a4b1c5908b0
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.0148 | 0.1427 | 200 | 1.2631 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mradermacher/Legend-of-the-Four-Winds-MN-12B-GGUF
|
mradermacher
| 2025-02-04T04:23:54Z | 426 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Aleteian/Legend-of-the-Four-Winds-MN-12B",
"base_model:quantized:Aleteian/Legend-of-the-Four-Winds-MN-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T02:16:20Z |
---
base_model: Aleteian/Legend-of-the-Four-Winds-MN-12B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Aleteian/Legend-of-the-Four-Winds-MN-12B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Legend-of-the-Four-Winds-MN-12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
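As one option, a single-file quant can be run with llama-cpp-python; a small sketch (assuming the Q4_K_M file has already been downloaded):
```python
from llama_cpp import Llama

# Point model_path at the downloaded quant; n_ctx sets the context window.
llm = Llama(model_path="Legend-of-the-Four-Winds-MN-12B.Q4_K_M.gguf", n_ctx=4096)
result = llm("Describe a storm rolling over a mountain pass.", max_tokens=200)
print(result["choices"][0]["text"])
```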
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Legend-of-the-Four-Winds-MN-12B-GGUF/resolve/main/Legend-of-the-Four-Winds-MN-12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Legend-of-the-Four-Winds-MN-12B-GGUF/resolve/main/Legend-of-the-Four-Winds-MN-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Legend-of-the-Four-Winds-MN-12B-GGUF/resolve/main/Legend-of-the-Four-Winds-MN-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Legend-of-the-Four-Winds-MN-12B-GGUF/resolve/main/Legend-of-the-Four-Winds-MN-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Legend-of-the-Four-Winds-MN-12B-GGUF/resolve/main/Legend-of-the-Four-Winds-MN-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Legend-of-the-Four-Winds-MN-12B-GGUF/resolve/main/Legend-of-the-Four-Winds-MN-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Legend-of-the-Four-Winds-MN-12B-GGUF/resolve/main/Legend-of-the-Four-Winds-MN-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Legend-of-the-Four-Winds-MN-12B-GGUF/resolve/main/Legend-of-the-Four-Winds-MN-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Legend-of-the-Four-Winds-MN-12B-GGUF/resolve/main/Legend-of-the-Four-Winds-MN-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Legend-of-the-Four-Winds-MN-12B-GGUF/resolve/main/Legend-of-the-Four-Winds-MN-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Legend-of-the-Four-Winds-MN-12B-GGUF/resolve/main/Legend-of-the-Four-Winds-MN-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
liusq19/Qwen2.5-1.5B-Open-R1-Distill
|
liusq19
| 2025-02-04T04:21:24Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceH4/Bespoke-Stratos-17k",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-02T08:38:37Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: HuggingFaceH4/Bespoke-Stratos-17k
library_name: transformers
model_name: Qwen/Qwen2.5-1.5B-Instruct
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen/Qwen2.5-1.5B-Instruct
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [HuggingFaceH4/Bespoke-Stratos-17k](https://huggingface.co/datasets/HuggingFaceH4/Bespoke-Stratos-17k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="liusq19/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shiqi_1/huggingface/runs/ak49qf9r)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.49.0.dev0
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
LJHjonghyeon/new_data
|
LJHjonghyeon
| 2025-02-04T04:19:32Z | 23 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T04:17:18Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LJHjonghyeon
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
earnxus/86ba5b4b-5c76-4dc9-80b0-6076ede86846
|
earnxus
| 2025-02-04T04:17:43Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T03:51:33Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 86ba5b4b-5c76-4dc9-80b0-6076ede86846
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ceac57436127cc6c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ceac57436127cc6c_train_data.json
type:
field_input: ''
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/86ba5b4b-5c76-4dc9-80b0-6076ede86846
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ceac57436127cc6c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: cc891c4e-9b2c-4c32-93f8-b418eb54f13f
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: cc891c4e-9b2c-4c32-93f8-b418eb54f13f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 86ba5b4b-5c76-4dc9-80b0-6076ede86846
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7016 | 0.6015 | 200 | 1.8011 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
philip-hightech/c4b1277f-6ee4-4273-8b91-1da9c4d5cef8
|
philip-hightech
| 2025-02-04T04:17:37Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T04:17:01Z |
---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c4b1277f-6ee4-4273-8b91-1da9c4d5cef8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-PhiForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 433b1171462ef288_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/433b1171462ef288_train_data.json
type:
field_input: critic_prompt
field_instruction: init_prompt
field_output: init_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/c4b1277f-6ee4-4273-8b91-1da9c4d5cef8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 250
micro_batch_size: 2
mlflow_experiment_name: /tmp/433b1171462ef288_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8608c2ec-a087-435a-9278-1ca3f3049fce
wandb_project: Mine-SN56-21-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8608c2ec-a087-435a-9278-1ca3f3049fce
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c4b1277f-6ee4-4273-8b91-1da9c4d5cef8
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 6.9360 |
| 6.8752 | 0.0059 | 63 | 6.8734 |
| 6.8601 | 0.0119 | 126 | 6.8584 |
| 6.8542 | 0.0178 | 189 | 6.8534 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
eddysang/023d75df-5795-42ab-a038-ec4ad11f3c1b
|
eddysang
| 2025-02-04T04:17:27Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-64k",
"region:us"
] | null | 2025-02-04T03:29:34Z |
---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 023d75df-5795-42ab-a038-ec4ad11f3c1b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-64k
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9702554f26460ac5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9702554f26460ac5_train_data.json
type:
field_input: ingredients
field_instruction: method
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 256
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: eddysang/023d75df-5795-42ab-a038-ec4ad11f3c1b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.00015
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 2
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/9702554f26460ac5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: yaudayah0
wandb_mode: online
wandb_name: 90e1caee-8148-4b00-a510-c0c50a07f653
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 90e1caee-8148-4b00-a510-c0c50a07f653
warmup_steps: 20
weight_decay: 0.02
xformers_attention: false
```
</details><br>
# 023d75df-5795-42ab-a038-ec4ad11f3c1b
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00015
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0052 | 1 | 2.1776 |
| 40.2369 | 0.0466 | 9 | 1.0181 |
| 30.8358 | 0.0932 | 18 | 0.9361 |
| 30.6756 | 0.1398 | 27 | 0.8983 |
| 27.4575 | 0.1864 | 36 | 0.8813 |
| 27.1535 | 0.2330 | 45 | 0.8693 |
| 26.7588 | 0.2796 | 54 | 0.8487 |
| 28.8767 | 0.3262 | 63 | 0.8452 |
| 24.3021 | 0.3728 | 72 | 0.8401 |
| 26.5022 | 0.4193 | 81 | 0.8342 |
| 25.2517 | 0.4659 | 90 | 0.8279 |
| 28.4574 | 0.5125 | 99 | 0.8274 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
shibajustfor/80277472-4439-4ae5-8a8e-a6d9ab524841
|
shibajustfor
| 2025-02-04T04:16:54Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T04:16:11Z |
---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 80277472-4439-4ae5-8a8e-a6d9ab524841
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-PhiForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 433b1171462ef288_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/433b1171462ef288_train_data.json
type:
field_input: critic_prompt
field_instruction: init_prompt
field_output: init_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/80277472-4439-4ae5-8a8e-a6d9ab524841
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/433b1171462ef288_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8608c2ec-a087-435a-9278-1ca3f3049fce
wandb_project: Birthday-SN56-39-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8608c2ec-a087-435a-9278-1ca3f3049fce
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 80277472-4439-4ae5-8a8e-a6d9ab524841
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 6.9360 |
| 6.9234 | 0.0094 | 50 | 6.9208 |
| 6.8959 | 0.0189 | 100 | 6.8939 |
| 6.891 | 0.0283 | 150 | 6.8911 |
| 6.8895 | 0.0377 | 200 | 6.8908 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso/88e9389b-0ecf-4bb2-87a1-51509d7562ad
|
lesso
| 2025-02-04T04:16:08Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T04:10:24Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 88e9389b-0ecf-4bb2-87a1-51509d7562ad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 05d23f8c0d4d9d78_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/05d23f8c0d4d9d78_train_data.json
type:
field_input: ''
field_instruction: ctx
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/88e9389b-0ecf-4bb2-87a1-51509d7562ad
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000101
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/god13/05d23f8c0d4d9d78_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 88a1423e-4ec5-43b2-a3b0-ca7acaab4d17
wandb_project: ab-god13
wandb_run: your_name
wandb_runid: 88a1423e-4ec5-43b2-a3b0-ca7acaab4d17
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 88e9389b-0ecf-4bb2-87a1-51509d7562ad
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000101
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5119 | 0.0028 | 1 | 2.5158 |
| 1.378 | 0.1423 | 50 | 1.5304 |
| 1.2774 | 0.2847 | 100 | 1.4797 |
| 1.2644 | 0.4270 | 150 | 1.4555 |
| 1.2039 | 0.5694 | 200 | 1.4486 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mradermacher/Zurich-1.5B-GCv2-5m-GGUF
|
mradermacher
| 2025-02-04T04:14:44Z | 291 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"gammacorpus",
"zurich",
"chat",
"conversational",
"en",
"dataset:rubenroy/GammaCorpus-v2-5m",
"base_model:rubenroy/Zurich-1.5B-GCv2-5m",
"base_model:quantized:rubenroy/Zurich-1.5B-GCv2-5m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-03T21:09:13Z |
---
base_model: rubenroy/Zurich-1.5B-GCv2-5m
datasets:
- rubenroy/GammaCorpus-v2-5m
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- gammacorpus
- zurich
- chat
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rubenroy/Zurich-1.5B-GCv2-5m
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-GGUF/resolve/main/Zurich-1.5B-GCv2-5m.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-GGUF/resolve/main/Zurich-1.5B-GCv2-5m.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-GGUF/resolve/main/Zurich-1.5B-GCv2-5m.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-GGUF/resolve/main/Zurich-1.5B-GCv2-5m.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-GGUF/resolve/main/Zurich-1.5B-GCv2-5m.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-GGUF/resolve/main/Zurich-1.5B-GCv2-5m.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-GGUF/resolve/main/Zurich-1.5B-GCv2-5m.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-GGUF/resolve/main/Zurich-1.5B-GCv2-5m.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-GGUF/resolve/main/Zurich-1.5B-GCv2-5m.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-GGUF/resolve/main/Zurich-1.5B-GCv2-5m.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-GGUF/resolve/main/Zurich-1.5B-GCv2-5m.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Zurich-1.5B-GCv2-5m-GGUF/resolve/main/Zurich-1.5B-GCv2-5m.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
abaddon182/6a2dfe2e-2f7a-4197-bbe4-e7c8541ccf6d
|
abaddon182
| 2025-02-04T04:13:15Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T03:08:51Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6a2dfe2e-2f7a-4197-bbe4-e7c8541ccf6d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5210a65ef5106af6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5210a65ef5106af6_train_data.json
type:
field_instruction: caption
field_output: matching_score
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: abaddon182/6a2dfe2e-2f7a-4197-bbe4-e7c8541ccf6d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/5210a65ef5106af6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0537fe74-0f1b-40d6-98fb-ec6c0598be9f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0537fe74-0f1b-40d6-98fb-ec6c0598be9f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6a2dfe2e-2f7a-4197-bbe4-e7c8541ccf6d
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.061 | 0.0000 | 1 | 2.1100 |
| 0.4251 | 0.0021 | 50 | 0.4079 |
| 0.385 | 0.0042 | 100 | 0.3764 |
| 0.3987 | 0.0063 | 150 | 0.3666 |
| 0.3883 | 0.0084 | 200 | 0.3646 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
dixedus/a06ac1c1-5c5e-4061-9e43-78b852a195f1
|
dixedus
| 2025-02-04T04:12:44Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T03:57:16Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a06ac1c1-5c5e-4061-9e43-78b852a195f1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 05d23f8c0d4d9d78_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/05d23f8c0d4d9d78_train_data.json
type:
field_input: ''
field_instruction: ctx
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: dixedus/a06ac1c1-5c5e-4061-9e43-78b852a195f1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/05d23f8c0d4d9d78_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 88a1423e-4ec5-43b2-a3b0-ca7acaab4d17
wandb_project: Gradients-On-Eight
wandb_run: your_name
wandb_runid: 88a1423e-4ec5-43b2-a3b0-ca7acaab4d17
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a06ac1c1-5c5e-4061-9e43-78b852a195f1
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 2.5089 |
| 2.5046 | 0.0032 | 9 | 2.4501 |
| 2.1437 | 0.0064 | 18 | 2.0470 |
| 1.875 | 0.0096 | 27 | 1.8732 |
| 1.8341 | 0.0128 | 36 | 1.7991 |
| 1.7549 | 0.0160 | 45 | 1.7380 |
| 1.7462 | 0.0192 | 54 | 1.6864 |
| 1.7131 | 0.0224 | 63 | 1.6484 |
| 1.6675 | 0.0256 | 72 | 1.6276 |
| 1.5945 | 0.0288 | 81 | 1.6179 |
| 1.5882 | 0.0320 | 90 | 1.6142 |
| 1.6643 | 0.0352 | 99 | 1.6137 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
leixa/d8c96614-966b-4b85-aa28-59fc6d5ca053
|
leixa
| 2025-02-04T04:11:56Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T03:56:19Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d8c96614-966b-4b85-aa28-59fc6d5ca053
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 05d23f8c0d4d9d78_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/05d23f8c0d4d9d78_train_data.json
type:
field_input: ''
field_instruction: ctx
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: leixa/d8c96614-966b-4b85-aa28-59fc6d5ca053
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/05d23f8c0d4d9d78_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 88a1423e-4ec5-43b2-a3b0-ca7acaab4d17
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 88a1423e-4ec5-43b2-a3b0-ca7acaab4d17
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d8c96614-966b-4b85-aa28-59fc6d5ca053
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 2.5089 |
| 2.5041 | 0.0032 | 9 | 2.4506 |
| 2.1417 | 0.0064 | 18 | 2.0448 |
| 1.8709 | 0.0096 | 27 | 1.8706 |
| 1.832 | 0.0128 | 36 | 1.7949 |
| 1.75 | 0.0160 | 45 | 1.7313 |
| 1.742 | 0.0192 | 54 | 1.6799 |
| 1.7067 | 0.0224 | 63 | 1.6430 |
| 1.6618 | 0.0256 | 72 | 1.6240 |
| 1.5908 | 0.0288 | 81 | 1.6152 |
| 1.5858 | 0.0320 | 90 | 1.6122 |
| 1.6621 | 0.0352 | 99 | 1.6114 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
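### Prompt construction (sketch)
The `datasets.type` block in the config above maps each JSON record to a prompt/completion pair: `field_instruction: ctx` fills the `{instruction}` slot of `format`, and `field_output: chosen` is the completion the model is trained on. A rough illustration of that mapping (the record contents below are invented for demonstration):
```python
# A made-up record with the same keys as the training file (05d23f8c0d4d9d78_train_data.json).
record = {"ctx": "Summarize the paragraph in one sentence.", "chosen": "The paragraph argues that ..."}

prompt_format = "{instruction}"                    # `format` from the config; field_input is empty
prompt = prompt_format.format(instruction=record["ctx"])
completion = record["chosen"]

# Axolotl concatenates prompt and completion into one training sequence; with
# train_on_inputs: false, the loss is computed only on the completion tokens.
print(prompt)
print(completion)
```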
|
prxy5604/518ebd42-b621-46da-bded-ef9115b9090b
|
prxy5604
| 2025-02-04T04:11:22Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"dbrx",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-dbrx",
"base_model:adapter:katuni4ka/tiny-random-dbrx",
"region:us"
] | null | 2025-02-04T04:10:11Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-dbrx
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 518ebd42-b621-46da-bded-ef9115b9090b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-dbrx
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a69c39734819523a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a69c39734819523a_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/518ebd42-b621-46da-bded-ef9115b9090b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/a69c39734819523a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f200e9b6-0cd3-4ed2-a65a-2d405aaa2695
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f200e9b6-0cd3-4ed2-a65a-2d405aaa2695
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 518ebd42-b621-46da-bded-ef9115b9090b
This model is a fine-tuned version of [katuni4ka/tiny-random-dbrx](https://huggingface.co/katuni4ka/tiny-random-dbrx) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 46.0 | 0.0038 | 1 | 11.5 |
| 46.0 | 0.1892 | 50 | 11.5 |
| 46.0 | 0.3784 | 100 | 11.5 |
| 46.0 | 0.5676 | 150 | 11.5 |
| 46.0 | 0.7569 | 200 | 11.5 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
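### Optimizer configuration (sketch)
The `optim_args` block overrides the Adam hyperparameters used by the 8-bit optimizer (`adamw_bnb_8bit`). Roughly, the configuration above resolves to the following bitsandbytes optimizer; axolotl constructs this internally, and the module below is only a stand-in for illustration:
```python
import torch.nn as nn
import bitsandbytes as bnb

model = nn.Linear(16, 16)  # stand-in module; the real run optimizes the LoRA parameters

# adamw_bnb_8bit with the lr and optim_args from the config above.
optimizer = bnb.optim.AdamW8bit(
    model.parameters(), lr=1e-4, betas=(0.9, 0.95), eps=1e-5, weight_decay=0.0
)
```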
|
tryingpro/e95eee45-f2d5-4766-be8a-98987eeb0598
|
tryingpro
| 2025-02-04T04:10:04Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"region:us"
] | null | 2025-02-04T03:30:26Z |
---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e95eee45-f2d5-4766-be8a-98987eeb0598
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c2fee9c78f1574ee_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c2fee9c78f1574ee_train_data.json
type:
field_input: Description
field_instruction: Patient
field_output: Doctor
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 256
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: tryingpro/e95eee45-f2d5-4766-be8a-98987eeb0598
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- down_proj
- up_proj
lr_scheduler: cosine
max_grad_norm: 2
max_steps: 90
micro_batch_size: 2
mlflow_experiment_name: /tmp/c2fee9c78f1574ee_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: tryingpro-unicourt
wandb_mode: online
wandb_name: 059bd8ea-8d4e-42bf-ae21-2eb2b22407b3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 059bd8ea-8d4e-42bf-ae21-2eb2b22407b3
warmup_steps: 20
weight_decay: 0.02
xformers_attention: false
```
</details><br>
# e95eee45-f2d5-4766-be8a-98987eeb0598
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 90
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0086 | 1 | nan |
| 0.0 | 0.4318 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso/8f5b6967-0d47-4a64-9461-01b2c0b5c656
|
lesso
| 2025-02-04T04:08:45Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T04:03:04Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8f5b6967-0d47-4a64-9461-01b2c0b5c656
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 05d23f8c0d4d9d78_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/05d23f8c0d4d9d78_train_data.json
type:
field_input: ''
field_instruction: ctx
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/8f5b6967-0d47-4a64-9461-01b2c0b5c656
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000101
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/god07/05d23f8c0d4d9d78_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 88a1423e-4ec5-43b2-a3b0-ca7acaab4d17
wandb_project: ab-god07
wandb_run: your_name
wandb_runid: 88a1423e-4ec5-43b2-a3b0-ca7acaab4d17
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8f5b6967-0d47-4a64-9461-01b2c0b5c656
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000101
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5119 | 0.0028 | 1 | 2.5158 |
| 1.378 | 0.1423 | 50 | 1.5301 |
| 1.2778 | 0.2847 | 100 | 1.4797 |
| 1.2647 | 0.4270 | 150 | 1.4554 |
| 1.2033 | 0.5694 | 200 | 1.4484 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
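### Merging the adapter (sketch)
To serve the fine-tuned model without a PEFT dependency at inference time, the LoRA weights can be folded into the base model. A minimal sketch, assuming the adapter was pushed to the `hub_model_id` from the config above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/SmolLM-360M-Instruct"
adapter_id = "lesso/8f5b6967-0d47-4a64-9461-01b2c0b5c656"   # hub_model_id from the config above

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()  # bake the LoRA deltas into the weights

# The result is a plain transformers model that can be saved and reloaded as usual.
merged.save_pretrained("smollm-360m-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("smollm-360m-merged")
```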
|
Best000/fd508986-b4df-4faa-8c71-4894fd0e5ad0
|
Best000
| 2025-02-04T04:07:14Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T04:00:26Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fd508986-b4df-4faa-8c71-4894fd0e5ad0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# fd508986-b4df-4faa-8c71-4894fd0e5ad0
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso/a10a1d6a-4175-41fd-ae01-c352af5a9b8d
|
lesso
| 2025-02-04T04:04:36Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T03:55:50Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a10a1d6a-4175-41fd-ae01-c352af5a9b8d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 7983b1695b7e7fb0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7983b1695b7e7fb0_train_data.json
type:
field_input: level
field_instruction: prompt
field_output: responses
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/a10a1d6a-4175-41fd-ae01-c352af5a9b8d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001017
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god17/7983b1695b7e7fb0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2d1a2331-2d93-4014-9474-321c82e2f1be
wandb_project: ab-god17
wandb_run: your_name
wandb_runid: 2d1a2331-2d93-4014-9474-321c82e2f1be
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a10a1d6a-4175-41fd-ae01-c352af5a9b8d
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001017
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1759 | 0.0024 | 1 | 1.3025 |
| 1.1813 | 0.1175 | 50 | 1.1481 |
| 1.7043 | 0.2350 | 100 | 1.1251 |
| 0.9229 | 0.3525 | 150 | 1.1125 |
| 1.0389 | 0.4700 | 200 | 1.1074 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
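### Learning-rate schedule (sketch)
The run uses a linear scheduler with 10 warmup steps over 200 training steps at a peak learning rate of 0.0001017. In transformers terms that corresponds roughly to the following; the trainer builds this internally, and the parameter below is only a stand-in:
```python
import torch
from transformers import get_linear_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]        # stand-in parameter for illustration
optimizer = torch.optim.AdamW(params, lr=1.017e-4)   # learning_rate from the config above
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=10, num_training_steps=200)

lrs = []
for _ in range(200):
    optimizer.step()
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])           # ramps up for 10 steps, then decays linearly to 0
```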
|
nathanialhunt/781e2999-9c07-42b3-a123-d6edaabd58d5
|
nathanialhunt
| 2025-02-04T04:03:36Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T03:58:04Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 781e2999-9c07-42b3-a123-d6edaabd58d5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 781e2999-9c07-42b3-a123-d6edaabd58d5
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
oliveirabruno01/DeepSeek-R1-Distill-Qwen-1.5B-exec-SFT-seed
|
oliveirabruno01
| 2025-02-04T04:02:23Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-31T19:42:00Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
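A generic text-generation sketch may serve as a starting point; the repo id is taken from this listing, and the prompt and generation settings are assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "oliveirabruno01/DeepSeek-R1-Distill-Qwen-1.5B-exec-SFT-seed"  # repo id from this listing

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", torch_dtype="auto").eval()

messages = [{"role": "user", "content": "Write a Python one-liner that reverses a string."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```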
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nhung03/cd930fdc-a984-445e-a988-7b92e8b3ccba
|
nhung03
| 2025-02-04T04:02:05Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-3B-Instruct",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T03:13:22Z |
---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cd930fdc-a984-445e-a988-7b92e8b3ccba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4f18f2f08d7cdc36_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4f18f2f08d7cdc36_train_data.json
type:
field_input: selftext
field_instruction: title
field_output: answers.text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/cd930fdc-a984-445e-a988-7b92e8b3ccba
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/4f18f2f08d7cdc36_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 865affea-a5f6-4eda-ac40-6a4195f23efd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 865affea-a5f6-4eda-ac40-6a4195f23efd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# cd930fdc-a984-445e-a988-7b92e8b3ccba
This model is a fine-tuned version of [unsloth/Qwen2.5-3B-Instruct](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3662 | 0.0128 | 200 | 2.3427 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
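### Loading the adapter in 8-bit (sketch)
The config above loads the base model through bitsandbytes quantization before attaching the LoRA adapter. A minimal sketch of 8-bit inference, assuming the adapter was pushed to the `hub_model_id` from the config above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "unsloth/Qwen2.5-3B-Instruct"
adapter_id = "nhung03/cd930fdc-a984-445e-a988-7b92e8b3ccba"   # hub_model_id from the config above

bnb_config = BitsAndBytesConfig(load_in_8bit=True)            # mirrors load_in_8bit: true above
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id).eval()

inputs = tokenizer("What does this adapter change?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```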
|
shibajustfor/31e5fb5f-cdcf-40c4-a0c4-99e739c5d967
|
shibajustfor
| 2025-02-04T03:56:44Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T03:52:13Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 31e5fb5f-cdcf-40c4-a0c4-99e739c5d967
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ceac57436127cc6c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ceac57436127cc6c_train_data.json
type:
field_input: ''
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/31e5fb5f-cdcf-40c4-a0c4-99e739c5d967
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ceac57436127cc6c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cc891c4e-9b2c-4c32-93f8-b418eb54f13f
wandb_project: Birthday-SN56-39-Gradients-On-Demand
wandb_run: your_name
wandb_runid: cc891c4e-9b2c-4c32-93f8-b418eb54f13f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 31e5fb5f-cdcf-40c4-a0c4-99e739c5d967
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0030 | 1 | 1.8250 |
| 1.7189 | 0.1504 | 50 | 1.7235 |
| 1.7386 | 0.3008 | 100 | 1.7143 |
| 1.6114 | 0.4511 | 150 | 1.7107 |
| 1.6668 | 0.6015 | 200 | 1.7101 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
brew35/b361d725-11ee-4819-9c49-7c15a39f4499
|
brew35
| 2025-02-04T03:55:19Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T02:59:21Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b361d725-11ee-4819-9c49-7c15a39f4499
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5210a65ef5106af6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5210a65ef5106af6_train_data.json
type:
field_instruction: caption
field_output: matching_score
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: brew35/b361d725-11ee-4819-9c49-7c15a39f4499
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/5210a65ef5106af6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0537fe74-0f1b-40d6-98fb-ec6c0598be9f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0537fe74-0f1b-40d6-98fb-ec6c0598be9f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b361d725-11ee-4819-9c49-7c15a39f4499
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4118 | 0.0042 | 200 | 0.4037 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
weilun007898/trial-toto1
|
weilun007898
| 2025-02-04T03:53:39Z | 25 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-04T03:52:07Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",   # place weights on GPU(s) automatically when available
    torch_dtype="auto",
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(
    conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
# Send the inputs to whichever device the model was placed on (not hard-coded to CUDA).
output_ids = model.generate(input_ids.to(model.device))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
|
lesso/0738a4ec-dd98-4e7a-8df0-0ce7dbd1db9f
|
lesso
| 2025-02-04T03:53:30Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T03:11:18Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0738a4ec-dd98-4e7a-8df0-0ce7dbd1db9f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Instruct-2407
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 3e5eab4715297236_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3e5eab4715297236_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/0738a4ec-dd98-4e7a-8df0-0ce7dbd1db9f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001017
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god17/3e5eab4715297236_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 06993ad5-9e1b-472b-9fb0-ffdcec07b62e
wandb_project: ab-god17
wandb_run: your_name
wandb_runid: 06993ad5-9e1b-472b-9fb0-ffdcec07b62e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0738a4ec-dd98-4e7a-8df0-0ce7dbd1db9f
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2192
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001017
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5118 | 0.0009 | 1 | 0.3299 |
| 0.6496 | 0.0462 | 50 | 0.2352 |
| 0.4482 | 0.0925 | 100 | 0.2269 |
| 0.4205 | 0.1387 | 150 | 0.2219 |
| 0.4653 | 0.1849 | 200 | 0.2192 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
phongtintruong/meomeo-mhubert-vietbud-24-100
|
phongtintruong
| 2025-02-04T03:53:11Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"meomeo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T03:52:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ciloku/717b26ad-0cef-4fc0-8d24-a68b0ce0933b
|
ciloku
| 2025-02-04T03:52:38Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
] | null | 2025-02-04T03:50:02Z |
---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 717b26ad-0cef-4fc0-8d24-a68b0ce0933b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: true
chat_template: llama3
data_processes: 24
dataset_prepared_path: null
datasets:
- data_files:
- 28cfef58c079ae09_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/28cfef58c079ae09_train_data.json
type:
field_input: comment
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 4
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: ciloku/717b26ad-0cef-4fc0-8d24-a68b0ce0933b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 6.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.04
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/28cfef58c079ae09_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 17333
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
total_train_batch_size: 32
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cae3f697-637d-476e-acf5-7861ed8393e4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cae3f697-637d-476e-acf5-7861ed8393e4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 717b26ad-0cef-4fc0-8d24-a68b0ce0933b
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 17333
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.0926 | 0.0029 | 1 | 1.6496 |
| 5.2871 | 0.1427 | 50 | 1.2253 |
| 4.5865 | 0.2853 | 100 | 1.1286 |
| 4.4735 | 0.4280 | 150 | 1.0896 |
| 4.836 | 0.5706 | 200 | 1.0813 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
utakumi/Hubert-kakeiken-W-closed_add_ver2
|
utakumi
| 2025-02-04T03:50:13Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hubert",
"automatic-speech-recognition",
"original_kakeiken_W_closed_add_ver2",
"generated_from_trainer",
"base_model:rinna/japanese-hubert-base",
"base_model:finetune:rinna/japanese-hubert-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-02-03T19:49:50Z |
---
library_name: transformers
license: apache-2.0
base_model: rinna/japanese-hubert-base
tags:
- automatic-speech-recognition
- original_kakeiken_W_closed_add_ver2
- generated_from_trainer
metrics:
- wer
model-index:
- name: Hubert-kakeiken-W-closed_add_ver2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Hubert-kakeiken-W-closed_add_ver2
This model is a fine-tuned version of [rinna/japanese-hubert-base](https://huggingface.co/rinna/japanese-hubert-base) on the ORIGINAL_KAKEIKEN_W_CLOSED_ADD_VER2 - JA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Wer: 0.9988
- Cer: 1.0129
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 12500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|:------:|
| 28.4059 | 1.0 | 880 | 10.6721 | 1.0 | 1.1284 |
| 9.1792 | 2.0 | 1760 | 6.9924 | 1.0 | 1.1284 |
| 4.9143 | 3.0 | 2640 | 3.8166 | 1.0 | 1.1284 |
| 3.1394 | 4.0 | 3520 | 2.8829 | 1.0 | 1.1283 |
| 2.7266 | 5.0 | 4400 | 1.9608 | 1.0 | 1.1444 |
| 1.4314 | 6.0 | 5280 | 0.8434 | 0.9999 | 1.0662 |
| 0.6837 | 7.0 | 6160 | 0.4583 | 0.9997 | 1.0330 |
| 0.403 | 8.0 | 7040 | 0.2512 | 0.9991 | 1.0479 |
| 0.3035 | 9.0 | 7920 | 0.1972 | 0.9993 | 1.0365 |
| 0.229 | 10.0 | 8800 | 0.0872 | 0.9991 | 1.0264 |
| 0.1995 | 11.0 | 9680 | 0.0959 | 0.9988 | 1.0262 |
| 0.1824 | 12.0 | 10560 | 0.1012 | 0.9988 | 1.0317 |
| 0.1774 | 13.0 | 11440 | 0.0541 | 0.9991 | 1.0220 |
| 0.1739 | 14.0 | 12320 | 0.0703 | 0.9990 | 1.0270 |
| 0.1609 | 15.0 | 13200 | 0.0480 | 0.9988 | 1.0203 |
| 0.1512 | 16.0 | 14080 | 0.0540 | 0.9988 | 1.0162 |
| 0.1412 | 17.0 | 14960 | 0.0396 | 0.9988 | 1.0188 |
| 0.1391 | 18.0 | 15840 | 0.0493 | 0.9988 | 1.0195 |
| 0.1325 | 19.0 | 16720 | 0.0366 | 0.9988 | 1.0186 |
| 0.1242 | 20.0 | 17600 | 0.0392 | 0.9988 | 1.0178 |
| 0.122 | 21.0 | 18480 | 0.0545 | 0.9988 | 1.0193 |
| 0.1143 | 22.0 | 19360 | 0.0408 | 0.9988 | 1.0185 |
| 0.1087 | 23.0 | 20240 | 0.0310 | 0.9988 | 1.0176 |
| 0.1013 | 24.0 | 21120 | 0.0262 | 0.9988 | 1.0166 |
| 0.0998 | 25.0 | 22000 | 0.0388 | 0.9988 | 1.0199 |
| 0.0903 | 26.0 | 22880 | 0.0280 | 0.9988 | 1.0166 |
| 0.088 | 27.0 | 23760 | 0.0492 | 0.9988 | 1.0197 |
| 0.0838 | 28.0 | 24640 | 0.0230 | 0.9988 | 1.0163 |
| 0.079 | 29.0 | 25520 | 0.0282 | 0.9988 | 1.0170 |
| 0.0747 | 30.0 | 26400 | 0.0271 | 0.9988 | 1.0162 |
| 0.0692 | 31.0 | 27280 | 0.0272 | 0.9988 | 1.0167 |
| 0.0699 | 32.0 | 28160 | 0.0427 | 0.9988 | 1.0143 |
| 0.0652 | 33.0 | 29040 | 0.0324 | 0.9988 | 1.0162 |
| 0.0624 | 34.0 | 29920 | 0.0315 | 0.9988 | 1.0163 |
| 0.0588 | 35.0 | 30800 | 0.0549 | 0.9988 | 1.0137 |
| 0.0594 | 36.0 | 31680 | 0.0457 | 0.9988 | 1.0142 |
| 0.0619 | 37.0 | 32560 | 0.0463 | 0.9988 | 1.0144 |
| 0.058 | 38.0 | 33440 | 0.0665 | 0.9988 | 1.0127 |
| 0.059 | 39.0 | 34320 | 0.0595 | 0.9988 | 1.0131 |
| 0.0563 | 39.9551 | 35160 | 0.0581 | 0.9988 | 1.0133 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
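A minimal inference sketch for this checkpoint, assuming it exposes a CTC head and processor compatible with the `automatic-speech-recognition` pipeline and that `sample.wav` is a 16 kHz mono recording:
```python
# Sketch only: transcribe an audio file with the fine-tuned Japanese HuBERT checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="utakumi/Hubert-kakeiken-W-closed_add_ver2",
)
print(asr("sample.wav")["text"])  # the audio path is an assumption
```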
|
ardaspear/2ca5d170-a115-46a1-9437-c3cbd08da1c9
|
ardaspear
| 2025-02-04T03:50:11Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"region:us"
] | null | 2025-02-04T03:30:20Z |
---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2ca5d170-a115-46a1-9437-c3cbd08da1c9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c2fee9c78f1574ee_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c2fee9c78f1574ee_train_data.json
type:
field_input: Description
field_instruction: Patient
field_output: Doctor
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: ardaspear/2ca5d170-a115-46a1-9437-c3cbd08da1c9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/c2fee9c78f1574ee_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 059bd8ea-8d4e-42bf-ae21-2eb2b22407b3
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: 059bd8ea-8d4e-42bf-ae21-2eb2b22407b3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2ca5d170-a115-46a1-9437-c3cbd08da1c9
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0043 | 1 | 3.1641 |
| 2.9388 | 0.0388 | 9 | 2.8535 |
| 2.5552 | 0.0777 | 18 | 2.5772 |
| 2.5415 | 0.1165 | 27 | 2.5006 |
| 2.373 | 0.1553 | 36 | 2.4519 |
| 2.4046 | 0.1942 | 45 | 2.4185 |
| 2.4154 | 0.2330 | 54 | 2.3935 |
| 2.4028 | 0.2718 | 63 | 2.3735 |
| 2.2202 | 0.3107 | 72 | 2.3604 |
| 2.2577 | 0.3495 | 81 | 2.3512 |
| 2.4227 | 0.3883 | 90 | 2.3467 |
| 2.2445 | 0.4272 | 99 | 2.3457 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
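Because only the LoRA adapter is published here, one common follow-up is merging it into the base model to obtain standalone weights. A hedged sketch, assuming enough memory for the 8B base in bfloat16:
```python
# Sketch only: merge the adapter into unsloth/llama-3-8b-Instruct and save full weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-Instruct", torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, "ardaspear/2ca5d170-a115-46a1-9437-c3cbd08da1c9").merge_and_unload()

out_dir = "llama3-8b-2ca5d170-merged"  # output path is an assumption
merged.save_pretrained(out_dir)
AutoTokenizer.from_pretrained("unsloth/llama-3-8b-Instruct").save_pretrained(out_dir)
```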
|
beast33/c416c466-4b6f-4ef3-b0f8-c8d6f5fd4160
|
beast33
| 2025-02-04T03:49:13Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T02:55:31Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c416c466-4b6f-4ef3-b0f8-c8d6f5fd4160
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5210a65ef5106af6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5210a65ef5106af6_train_data.json
type:
field_instruction: caption
field_output: matching_score
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: beast33/c416c466-4b6f-4ef3-b0f8-c8d6f5fd4160
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/5210a65ef5106af6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0537fe74-0f1b-40d6-98fb-ec6c0598be9f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0537fe74-0f1b-40d6-98fb-ec6c0598be9f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c416c466-4b6f-4ef3-b0f8-c8d6f5fd4160
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4158 | 0.0042 | 200 | 0.4047 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
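The config above loads the base model in 8-bit (`load_in_8bit: true`) during training; a minimal sketch of reproducing that setup at inference time with bitsandbytes, with the quantization settings as assumptions:
```python
# Sketch only: load the 0.5B base in 8-bit and attach the published adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2-0.5B-Instruct",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "beast33/c416c466-4b6f-4ef3-b0f8-c8d6f5fd4160")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-0.5B-Instruct")
```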
|
ciloku/ffa9f622-d7b6-43ee-bf92-e95a5fe2b09b
|
ciloku
| 2025-02-04T03:49:00Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-64k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-64k",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T03:09:48Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ffa9f622-d7b6-43ee-bf92-e95a5fe2b09b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-64k
bf16: true
chat_template: llama3
data_processes: 24
dataset_prepared_path: null
datasets:
- data_files:
- 9bd7b6044d104eec_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9bd7b6044d104eec_train_data.json
type:
field_input: ''
field_instruction: input_text
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 4
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: ciloku/ffa9f622-d7b6-43ee-bf92-e95a5fe2b09b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 6.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.04
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/9bd7b6044d104eec_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 17333
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
total_train_batch_size: 32
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e5a6e46b-b77f-4d50-a625-e1eb21e1df7c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e5a6e46b-b77f-4d50-a625-e1eb21e1df7c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ffa9f622-d7b6-43ee-bf92-e95a5fe2b09b
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-64k](https://huggingface.co/NousResearch/Yarn-Solar-10b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0141
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 17333
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.3346 | 0.0125 | 1 | 2.2198 |
| 0.092 | 0.6231 | 50 | 0.0596 |
| 0.0042 | 1.2461 | 100 | 0.0428 |
| 0.0069 | 1.8692 | 150 | 0.0153 |
| 0.0015 | 2.4922 | 200 | 0.0141 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
daydream-org/DeepSeek-R1-GGUF-11446
|
daydream-org
| 2025-02-04T03:45:00Z | 264 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-02T10:06:59Z |
https://github.com/ggerganov/llama.cpp/pull/11446
|
nhunglaaaaaaa/135a1116-c9fa-4ae4-b3ca-8a7efcbaf304
|
nhunglaaaaaaa
| 2025-02-04T03:41:37Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-3B-Instruct",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T03:13:41Z |
---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 135a1116-c9fa-4ae4-b3ca-8a7efcbaf304
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4f18f2f08d7cdc36_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4f18f2f08d7cdc36_train_data.json
type:
field_input: selftext
field_instruction: title
field_output: answers.text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhunglaaaaaaa/135a1116-c9fa-4ae4-b3ca-8a7efcbaf304
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/4f18f2f08d7cdc36_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 865affea-a5f6-4eda-ac40-6a4195f23efd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 865affea-a5f6-4eda-ac40-6a4195f23efd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 135a1116-c9fa-4ae4-b3ca-8a7efcbaf304
This model is a fine-tuned version of [unsloth/Qwen2.5-3B-Instruct](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3649 | 0.0128 | 200 | 2.3432 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
XelotX/DeepSeek-R1-unsloth-GGUF
|
XelotX
| 2025-02-04T03:41:26Z | 229 | 0 |
transformers
|
[
"transformers",
"gguf",
"deepseek_v3",
"text-generation",
"deepseek",
"unsloth",
"custom_code",
"en",
"arxiv:2501.12948",
"base_model:deepseek-ai/DeepSeek-R1",
"base_model:quantized:deepseek-ai/DeepSeek-R1",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-02-04T03:41:25Z |
---
base_model: deepseek-ai/DeepSeek-R1
language:
- en
library_name: transformers
license: mit
tags:
- deepseek
- unsloth
- transformers
---
<div>
<p style="margin-bottom: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/deepseek-r1-all-versions-678e1c48f5d2fce87892ace5">our collection</a> for versions of Deepseek-R1 including GGUF & 4-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em>Unsloth's DeepSeek-R1 <a href="https://unsloth.ai/blog/deepseekr1-dynamic">1.58-bit + 2-bit Dynamic Quants</a> are selectively quantized, greatly improving accuracy over standard 1-bit/2-bit quantization.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-r1-on-your-own-local-device">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">Instructions to run this model in llama.cpp:</h1>
</div>
You can also view more detailed instructions here: [unsloth.ai/blog/deepseekr1-dynamic](https://unsloth.ai/blog/deepseekr1-dynamic)
1. Do not forget the `<｜User｜>` and `<｜Assistant｜>` tokens! Alternatively, use a chat template formatter.
2. Obtain the latest `llama.cpp` at https://github.com/ggerganov/llama.cpp. You can follow the build instructions below as well:
```bash
apt-get update
apt-get install build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
-DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```
3. It's best to use `--min-p 0.05` to counteract very rare token predictions - I found this to work well especially for the 1.58bit model.
4. Download the model via:
```python
# pip install huggingface_hub hf_transfer
# import os # Optional for faster downloading
# os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
snapshot_download(
repo_id = "unsloth/DeepSeek-R1-GGUF",
local_dir = "DeepSeek-R1-GGUF",
allow_patterns = ["*UD-IQ1_S*"], # Select quant type UD-IQ1_S for 1.58bit
)
```
5. Example run with the K cache quantized to Q4_0. **Note: `-no-cnv` disables auto conversation mode.**
```bash
./llama.cpp/llama-cli \
--model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
--cache-type-k q4_0 \
--threads 12 -no-cnv --prio 2 \
--temp 0.6 \
--ctx-size 8192 \
--seed 3407 \
--prompt "<｜User｜>Create a Flappy Bird game in Python.<｜Assistant｜>"
```
Example output:
```txt
<think>
Okay, so I need to figure out what 1 plus 1 is. Hmm, where do I even start? I remember from school that adding numbers is pretty basic, but I want to make sure I understand it properly.
Let me think, 1 plus 1. So, I have one item and I add another one. Maybe like a apple plus another apple. If I have one apple and someone gives me another, I now have two apples. So, 1 plus 1 should be 2. That makes sense.
Wait, but sometimes math can be tricky. Could it be something else? Like, in a different number system maybe? But I think the question is straightforward, using regular numbers, not like binary or hexadecimal or anything.
I also recall that in arithmetic, addition is combining quantities. So, if you have two quantities of 1, combining them gives you a total of 2. Yeah, that seems right.
Is there a scenario where 1 plus 1 wouldn't be 2? I can't think of any...
```
6. If you have a GPU (RTX 4090 for example) with 24GB, you can offload multiple layers to the GPU for faster processing. If you have multiple GPUs, you can probably offload more layers.
```bash
./llama.cpp/llama-cli \
--model DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
--cache-type-k q4_0 \
--threads 12 -no-cnv --prio 2 \
--n-gpu-layers 7 \
--temp 0.6 \
--ctx-size 8192 \
--seed 3407 \
--prompt "<｜User｜>Create a Flappy Bird game in Python.<｜Assistant｜>"
```
7. If you want to merge the weights together, use this script:
```bash
./llama.cpp/llama-gguf-split --merge \
DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
merged_file.gguf
```
| MoE Bits | Type | Disk Size | Accuracy | Link | Details |
| -------- | -------- | ------------ | ------------ | ---------------------| ---------- |
| 1.58bit | UD-IQ1_S | **131GB** | Fair | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_S) | MoE all 1.56bit. `down_proj` in MoE mixture of 2.06/1.56bit |
| 1.73bit | UD-IQ1_M | **158GB** | Good | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_M) | MoE all 1.56bit. `down_proj` in MoE left at 2.06bit |
| 2.22bit | UD-IQ2_XXS | **183GB** | Better | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ2_XXS) | MoE all 2.06bit. `down_proj` in MoE mixture of 2.5/2.06bit |
| 2.51bit | UD-Q2_K_XL | **212GB** | Best | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-Q2_K_XL) | MoE all 2.5bit. `down_proj` in MoE mixture of 3.5/2.5bit |
# Finetune LLMs 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the DeepSeek team for creating and releasing these models.
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/π€%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>ποΈ</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
| DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly change their configs and tokenizers. Please use our settings to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
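Under this protocol, pass@1 is simply the fraction of the 64 sampled responses judged correct, averaged over queries; a minimal sketch of that estimator (the correctness checker is an assumed user-supplied function):
```python
# Sketch only: estimate pass@1 from k sampled responses per query.
def pass_at_1(responses_per_query, is_correct):
    """responses_per_query: list of lists (k=64 responses per query); is_correct: response -> bool."""
    per_query = [
        sum(is_correct(r) for r in responses) / len(responses)
        for responses in responses_per_query
    ]
    return sum(per_query) / len(per_query)
```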
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website, [chat.deepseek.com](https://chat.deepseek.com), by switching on the "DeepThink" button.
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang):
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
### Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
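A minimal sketch of applying these recommendations through an OpenAI-compatible client, here pointed at the local vLLM server started above (the base URL, port, and placeholder API key are assumptions):
```python
# Sketch only: query a locally served distill model with the recommended sampling settings.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[  # no system prompt, per recommendation 2
        {
            "role": "user",
            "content": "Solve 3x + 5 = 20. Please reason step by step, "
                       "and put your final answer within \\boxed{}.",
        }
    ],
    temperature=0.6,
    top_p=0.95,
)
print(resp.choices[0].message.content)
```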
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
|
abenius/6879b0a0-01d7-49cd-9961-35a291b178dc
|
abenius
| 2025-02-04T03:40:44Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-64k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-64k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T03:10:17Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6879b0a0-01d7-49cd-9961-35a291b178dc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9bd7b6044d104eec_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9bd7b6044d104eec_train_data.json
type:
field_input: ''
field_instruction: input_text
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: abenius/6879b0a0-01d7-49cd-9961-35a291b178dc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/9bd7b6044d104eec_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: e5a6e46b-b77f-4d50-a625-e1eb21e1df7c
wandb_project: Gradients-On-12
wandb_run: your_name
wandb_runid: e5a6e46b-b77f-4d50-a625-e1eb21e1df7c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 6879b0a0-01d7-49cd-9961-35a291b178dc
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-64k](https://huggingface.co/NousResearch/Yarn-Solar-10b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1695 | 0.6240 | 200 | 0.0491 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso/73492de5-0e81-4ecd-ad5c-57092d845191
|
lesso
| 2025-02-04T03:40:44Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"region:us"
] | null | 2025-02-04T03:32:43Z |
---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 73492de5-0e81-4ecd-ad5c-57092d845191
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c2fee9c78f1574ee_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c2fee9c78f1574ee_train_data.json
type:
field_input: Description
field_instruction: Patient
field_output: Doctor
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/73492de5-0e81-4ecd-ad5c-57092d845191
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001015
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 150
micro_batch_size: 2
mlflow_experiment_name: /tmp/G.O.D/c2fee9c78f1574ee_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 059bd8ea-8d4e-42bf-ae21-2eb2b22407b3
wandb_project: ab-god15
wandb_run: your_name
wandb_runid: 059bd8ea-8d4e-42bf-ae21-2eb2b22407b3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 73492de5-0e81-4ecd-ad5c-57092d845191
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001015
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0005 | 1 | nan |
| 0.0 | 0.0270 | 50 | nan |
| 0.0 | 0.0540 | 100 | nan |
| 0.0 | 0.0810 | 150 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso/9d7d078d-05eb-4176-b8ab-a1ec0faed009
|
lesso
| 2025-02-04T03:38:29Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T03:25:34Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d7d078d-05eb-4176-b8ab-a1ec0faed009
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 5210a65ef5106af6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5210a65ef5106af6_train_data.json
type:
field_instruction: caption
field_output: matching_score
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/9d7d078d-05eb-4176-b8ab-a1ec0faed009
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000101
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/god13/5210a65ef5106af6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0537fe74-0f1b-40d6-98fb-ec6c0598be9f
wandb_project: ab-god13
wandb_run: your_name
wandb_runid: 0537fe74-0f1b-40d6-98fb-ec6c0598be9f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9d7d078d-05eb-4176-b8ab-a1ec0faed009
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000101
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0982 | 0.0003 | 1 | 2.1100 |
| 0.3818 | 0.0168 | 50 | 0.3729 |
| 0.374 | 0.0335 | 100 | 0.3632 |
| 0.3678 | 0.0503 | 150 | 0.3610 |
| 0.3696 | 0.0670 | 200 | 0.3593 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
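A minimal inference sketch (added here, not part of the auto-generated card), assuming the adapter weights in this repo load cleanly; the base model id is taken from the config above, and `AutoPeftModelForCausalLM` resolves it automatically from `adapter_config.json`.
```python
# Sketch: load this LoRA adapter together with its Qwen2-0.5B-Instruct base.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

repo_id = "lesso/9d7d078d-05eb-4176-b8ab-a1ec0faed009"
model = AutoPeftModelForCausalLM.from_pretrained(repo_id)  # base model resolved from adapter_config.json
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-0.5B-Instruct")

# Per the config, the adapter was trained to emit a matching score for a caption.
prompt = "A photo of a cat sitting on a windowsill."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```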
|
iamwille/speecht5_finetuned_iamwille_yoruba
|
iamwille
| 2025-02-04T03:36:58Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-02-04T03:11:13Z |
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_iamwille_yoruba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_iamwille_yoruba
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 3.853 | 3.3390 | 100 | 0.4322 |
| 3.2793 | 6.6780 | 200 | 0.3927 |
| 3.1502 | 10.0 | 300 | 0.3776 |
| 3.0543 | 13.3390 | 400 | 0.3716 |
| 2.9836 | 16.6780 | 500 | 0.3689 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
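As a hedged inference sketch (not part of the auto-generated card), the checkpoint should work with the standard SpeechT5 text-to-speech flow. The zero speaker embedding below is a placeholder assumption; a real 512-dimensional x-vector gives a more natural voice.
```python
# Sketch: synthesize speech with the fine-tuned SpeechT5 checkpoint.
# The zero speaker embedding is a placeholder; a real x-vector
# (e.g. from a speaker-verification model) sounds better.
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo_id = "iamwille/speecht5_finetuned_iamwille_yoruba"
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Bawo ni o se wa?", return_tensors="pt")  # illustrative Yoruba prompt
speaker_embeddings = torch.zeros((1, 512))  # placeholder speaker identity
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```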
|
Anna567/clf-v18
|
Anna567
| 2025-02-04T03:29:57Z | 36 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-04T03:29:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
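A minimal sketch, assuming the checkpoint is a standard BERT sequence-classification head as the tags suggest (label names are not documented in this card, so outputs are raw label ids and scores):
```python
# Sketch: run the classifier through the text-classification pipeline.
from transformers import pipeline

clf = pipeline("text-classification", model="Anna567/clf-v18")
print(clf("Replace this with the text you want to classify."))
# e.g. [{'label': 'LABEL_0', 'score': 0.97}]  # illustrative output only
```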
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alchemist69/6325903e-b262-44f3-b001-7c7f3b12f0e6
|
alchemist69
| 2025-02-04T03:29:12Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T02:53:22Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6325903e-b262-44f3-b001-7c7f3b12f0e6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3870caef86e1df79_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3870caef86e1df79_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: alchemist69/6325903e-b262-44f3-b001-7c7f3b12f0e6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/3870caef86e1df79_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 378b567c-e5be-4fe8-af18-7c29b71aeeb6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 378b567c-e5be-4fe8-af18-7c29b71aeeb6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6325903e-b262-44f3-b001-7c7f3b12f0e6
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.8773 | 0.0085 | 1 | 2.1867 |
| 2.5449 | 0.4255 | 50 | 0.8672 |
| 3.0493 | 0.8511 | 100 | 0.7109 |
| 2.3406 | 1.2766 | 150 | 0.6188 |
| 1.284 | 1.7021 | 200 | 0.5870 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
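As a hedged sketch not taken from the original card, the adapter can be merged into the Yarn-Mistral base for standalone inference; `trust_remote_code=True` mirrors the training config above.
```python
# Sketch: merge the LoRA adapter into the base weights for standalone use.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

repo_id = "alchemist69/6325903e-b262-44f3-b001-7c7f3b12f0e6"
model = AutoPeftModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
)
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Mistral-7b-64k", trust_remote_code=True)

merged.save_pretrained("yarn-mistral-7b-64k-merged")  # ordinary transformers checkpoint
tokenizer.save_pretrained("yarn-mistral-7b-64k-merged")
```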
|
mrferr3t/6b2574cf-3f45-4323-a7d1-8d1a53a5ec5c
|
mrferr3t
| 2025-02-04T03:28:58Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T03:21:41Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6b2574cf-3f45-4323-a7d1-8d1a53a5ec5c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 7983b1695b7e7fb0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7983b1695b7e7fb0_train_data.json
type:
field_input: level
field_instruction: prompt
field_output: responses
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 40
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/6b2574cf-3f45-4323-a7d1-8d1a53a5ec5c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/7983b1695b7e7fb0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 50
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 40
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2d1a2331-2d93-4014-9474-321c82e2f1be
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2d1a2331-2d93-4014-9474-321c82e2f1be
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6b2574cf-3f45-4323-a7d1-8d1a53a5ec5c
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 132
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0012 | 1 | 1.3250 |
| No log | 0.0470 | 40 | 1.2158 |
| No log | 0.0941 | 80 | 1.1468 |
| 1.1565 | 0.1411 | 120 | 1.1319 |
| 1.1565 | 0.1881 | 160 | 1.1224 |
| 1.0711 | 0.2352 | 200 | 1.1112 |
| 1.0711 | 0.2822 | 240 | 1.1066 |
| 1.0711 | 0.3292 | 280 | 1.1014 |
| 1.0593 | 0.3762 | 320 | 1.0947 |
| 1.0593 | 0.4233 | 360 | 1.0879 |
| 1.0541 | 0.4703 | 400 | 1.0853 |
| 1.0541 | 0.5173 | 440 | 1.0796 |
| 1.0541 | 0.5644 | 480 | 1.0763 |
| 1.014 | 0.6114 | 520 | 1.0707 |
| 1.014 | 0.6584 | 560 | 1.0671 |
| 1.0009 | 0.7055 | 600 | 1.0663 |
| 1.0009 | 0.7525 | 640 | 1.0638 |
| 1.0009 | 0.7995 | 680 | 1.0589 |
| 0.9937 | 0.8466 | 720 | 1.0576 |
| 0.9937 | 0.8936 | 760 | 1.0554 |
| 1.0105 | 0.9406 | 800 | 1.0501 |
| 1.0105 | 0.9877 | 840 | 1.0510 |
| 1.0105 | 1.0347 | 880 | 1.0538 |
| 0.9426 | 1.0817 | 920 | 1.0577 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
rsh345/llama3-8b-finance-elyza-linear-a_w01-b_w09
|
rsh345
| 2025-02-04T03:25:04Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:elyza/Llama-3-ELYZA-JP-8B",
"base_model:merge:elyza/Llama-3-ELYZA-JP-8B",
"base_model:instruction-pretrain/finance-Llama3-8B",
"base_model:merge:instruction-pretrain/finance-Llama3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-04T03:10:28Z |
---
base_model:
- elyza/Llama-3-ELYZA-JP-8B
- instruction-pretrain/finance-Llama3-8B
library_name: transformers
tags:
- mergekit
- merge
---
# llama3-8b-finance-elyza-linear-a_w01-b_w09
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B)
* [instruction-pretrain/finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: elyza/Llama-3-ELYZA-JP-8B
parameters:
weight: 0.1
- model: instruction-pretrain/finance-Llama3-8B
parameters:
weight: 0.9
merge_method: linear
dtype: float16
```
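Once merged, the result is an ordinary float16 Llama-3 checkpoint; a minimal loading sketch (assuming the merged weights are what is published in this repo) follows.
```python
# Sketch: load the linearly merged checkpoint like any other causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "rsh345/llama3-8b-finance-elyza-linear-a_w01-b_w09"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

prompt = "What factors drive Japanese bank stock valuations?"  # illustrative prompt only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```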
|
Eric-Lessa/t5_chatbot_doctor
|
Eric-Lessa
| 2025-02-04T03:24:47Z | 55 | 0 | null |
[
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2025-02-02T23:36:58Z |
---
license: apache-2.0
---
|