| Column | Type | Min | Max |
| --- | --- | --- | --- |
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-08 06:28:05 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (546 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-08 06:27:40 |
| card | string (length) | 11 | 1.01M |
leolu-1015/Llama-3.2-3B-wa-kl-h-a-joint-merged
leolu-1015
2025-08-11T23:10:52Z
0
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-08-11T23:06:20Z
--- license: apache-2.0 ---
webview/blockassist-bc-nasty_flapping_lobster_1754949589
webview
2025-08-11T22:13:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "nasty flapping lobster", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T22:13:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - nasty flapping lobster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tensorblock/shahidul034_MediPhi-Instruct-GGUF
tensorblock
2025-08-11T22:01:56Z
0
0
transformers
[ "transformers", "gguf", "TensorBlock", "GGUF", "base_model:shahidul034/MediPhi-Instruct", "base_model:quantized:shahidul034/MediPhi-Instruct", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-11T21:19:45Z
--- library_name: transformers tags: - TensorBlock - GGUF base_model: shahidul034/MediPhi-Instruct --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> [![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co) [![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2) [![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock) [![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock) ## shahidul034/MediPhi-Instruct - GGUF <div style="text-align: left; margin: 20px 0;"> <a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Join our Discord to learn more about what we're building β†— </a> </div> This repo contains GGUF format model files for [shahidul034/MediPhi-Instruct](https://huggingface.co/shahidul034/MediPhi-Instruct). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277). ## Our projects <table border="1" cellspacing="0" cellpadding="10"> <tr> <th colspan="2" style="font-size: 25px;">Forge</th> </tr> <tr> <th colspan="2"> <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/> </th> </tr> <tr> <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th> </tr> <tr> <th colspan="2"> <a href="https://github.com/TensorBlock/forge" target="_blank" style=" display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif; ">πŸš€ Try it now! 
πŸš€</a> </th> </tr> <tr> <th style="font-size: 25px;">Awesome MCP Servers</th> <th style="font-size: 25px;">TensorBlock Studio</th> </tr> <tr> <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th> <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th> </tr> <tr> <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th> <th>A lightweight, open, and extensible multi-LLM interaction studio.</th> </tr> <tr> <th> <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style=" display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif; ">πŸ‘€ See what we built πŸ‘€</a> </th> <th> <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style=" display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif; ">πŸ‘€ See what we built πŸ‘€</a> </th> </tr> </table> ## Prompt template ``` <|system|> {system_prompt}<|end|> <|user|> {prompt}<|end|> <|assistant|> ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [MediPhi-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q2_K.gguf) | Q2_K | 1.416 GB | smallest, significant quality loss - not recommended for most purposes | | [MediPhi-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q3_K_S.gguf) | Q3_K_S | 1.682 GB | very small, high quality loss | | [MediPhi-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q3_K_M.gguf) | Q3_K_M | 1.955 GB | very small, high quality loss | | [MediPhi-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q3_K_L.gguf) | Q3_K_L | 2.088 GB | small, substantial quality loss | | [MediPhi-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q4_0.gguf) | Q4_0 | 2.176 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [MediPhi-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q4_K_S.gguf) | Q4_K_S | 2.189 GB | small, greater quality loss | | [MediPhi-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q4_K_M.gguf) | Q4_K_M | 2.393 GB | medium, balanced quality - recommended | | [MediPhi-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q5_0.gguf) | Q5_0 | 2.641 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [MediPhi-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q5_K_S.gguf) | Q5_K_S | 2.641 GB | large, low quality loss - recommended | | [MediPhi-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q5_K_M.gguf) | Q5_K_M | 2.815 GB | large, very low quality loss - recommended | | [MediPhi-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q6_K.gguf) | Q6_K | 3.136 GB | 
very large, extremely low quality loss | | [MediPhi-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/shahidul034_MediPhi-Instruct-GGUF/blob/main/MediPhi-Instruct-Q8_0.gguf) | Q8_0 | 4.061 GB | very large, extremely low quality loss - not recommended | ## Downloading instructions ### Command line First, install the Hugging Face CLI ```shell pip install -U "huggingface_hub[cli]" ``` Then, download the individual model file to a local directory ```shell huggingface-cli download tensorblock/shahidul034_MediPhi-Instruct-GGUF --include "MediPhi-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/shahidul034_MediPhi-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
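The same download can also be scripted from Python; a minimal sketch using `huggingface_hub` (the repo and file names are taken from the table above, the local directory is an assumption):

```python
from huggingface_hub import hf_hub_download

# Fetch a single quantized file from the repo listed above.
# The Q4_K_M variant is the one the table recommends for balanced quality.
local_path = hf_hub_download(
    repo_id="tensorblock/shahidul034_MediPhi-Instruct-GGUF",
    filename="MediPhi-Instruct-Q4_K_M.gguf",
    local_dir="./models",  # assumed destination directory
)
print(local_path)
```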
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_1_lr_0.0001_beta_0.05_1280_all_37_epoch_1_layer_16
winnieyangwannan
2025-08-11T21:01:04Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T20:59:30Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ESERCKR/blockassist-bc-scurrying_lanky_cassowary_1754941497
ESERCKR
2025-08-11T19:46:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scurrying lanky cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T19:45:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scurrying lanky cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
shivak/gpt-oss-20b-onnx-hybrid
shivak
2025-08-11T19:44:34Z
3
0
null
[ "onnx", "safetensors", "openai_moe", "base_model:openai/gpt-oss-20b", "base_model:quantized:openai/gpt-oss-20b", "license:apache-2.0", "region:us" ]
null
2025-08-09T23:31:31Z
--- license: apache-2.0 base_model: - openai/gpt-oss-20b --- # Hybrid inference (AMD NPU+GPU) for gpt-oss-20b This is a version of gpt-oss-20b set up for hybrid NPU+GPU inference on AMD Ryzen AI hardware (lots of MatMuls are scheduled on the NPU). This should make it run faster than GPU-only implementations such as llama.cpp. **NOTE**: this doesn't yet run on Ryzen AI. gpt-oss-20b uses MoEs with the Swiglu activation, which were added [just recently](https://github.com/microsoft/onnxruntime/pull/25619) to onnxruntime. AMD still needs to rebuild [onnxruntime-genai-directml-ryzenai](https://pypi.amd.com/simple/onnxruntime-genai-directml-ryzenai/). Then, you should be able to run it in [Lemonade](https://lemonade-server.ai/). ## How this was made [gpt-oss-20b-onnx](https://huggingface.co/onnxruntime/gpt-oss-20b-onnx) converted gpt-oss-20b to ONNX, making sure to translate the MoE code to a [QMoE ONNX operator](https://github.com/microsoft/onnxruntime/blob/main/docs/ContribOperators.md#com.microsoft.QMoE). I took that and ran it through model_generate for hybrid inference, while editing out various bugs/incompatibilities. In particular, the hybrid_llm_gqo pass is removed (it doesn't support a bias term in the GQO MatMul) and the matmulnbits pass is skipped just for the LM head (its dimensions are incompatible). I didn't perform any further quantization.
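Since the card hinges on the MoE layers being expressed as the `com.microsoft.QMoE` contrib operator, a quick sanity check on a downloaded copy is to scan the ONNX graph for that op. A minimal sketch, assuming the graph file is named `model.onnx` (external weight data is skipped so the load stays cheap):

```python
import onnx

# Load only the graph structure; the large external weight files are not pulled in.
graph = onnx.load("model.onnx", load_external_data=False).graph

qmoe_nodes = [n for n in graph.node if n.op_type == "QMoE"]
print(f"Found {len(qmoe_nodes)} QMoE nodes")
for n in qmoe_nodes[:3]:
    print(n.name, n.domain)  # contrib ops live in the com.microsoft domain
```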
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754936691
ggozzy
2025-08-11T18:26:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T18:25:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
RMCian/blockassist-bc-wiry_sturdy_cobra_1754936603
RMCian
2025-08-11T18:24:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry sturdy cobra", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T18:23:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry sturdy cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
VIDEOS-19-jhoselyn-maura-viral-video-Clip/NEW.FULL.VIDEOS.jhoselyn.maura.Viral.Video.Link.Official.Tutorial
VIDEOS-19-jhoselyn-maura-viral-video-Clip
2025-08-11T17:52:56Z
0
0
null
[ "region:us" ]
null
2025-08-11T17:52:34Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?leaked-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
dgsilvia/q-FrozenLake-v1-4x4-noSlippery
dgsilvia
2025-08-11T17:27:33Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-08-11T17:27:28Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="dgsilvia/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
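The snippet above relies on the `load_from_hub` helper from the Hugging Face Deep RL course; a self-contained sketch of the same flow, assuming the pickle holds a dict with `env_id` and `qtable` keys as in the course template, could look like:

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download the pickled Q-table from the Hub and load it.
path = hf_hub_download(repo_id="dgsilvia/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# The 4x4 no-slippery variant needs is_slippery=False when recreating the env.
env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```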
RMCian/blockassist-bc-wiry_sturdy_cobra_1754932960
RMCian
2025-08-11T17:23:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry sturdy cobra", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T17:23:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry sturdy cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754931184
ggozzy
2025-08-11T16:54:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T16:54:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
metahuis/blockassist-bc-lumbering_shy_raven_1754928968
metahuis
2025-08-11T16:16:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lumbering shy raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T16:16:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lumbering shy raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
0xAgo/blockassist-bc-agile_tough_camel_1754923756
0xAgo
2025-08-11T15:05:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "agile tough camel", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T15:05:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - agile tough camel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
barshaann/blockassist-bc-insectivorous_skilled_grasshopper_1754922208
barshaann
2025-08-11T14:41:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous skilled grasshopper", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T14:31:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous skilled grasshopper --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
JonusNattapong/thai-slm-moe-v2
JonusNattapong
2025-08-11T14:28:59Z
11
0
transformers
[ "transformers", "pytorch", "safetensors", "slm_moe", "text-generation", "thai", "language-model", "mixture-of-experts", "small-language-model", "custom_code", "th", "dataset:ZombitX64/Wikipedia-Thai", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2025-08-08T11:00:11Z
--- language: - th license: apache-2.0 tags: - thai - language-model - mixture-of-experts - small-language-model - transformers datasets: - ZombitX64/Wikipedia-Thai widget: - text: "ΰΈ›ΰΈ£ΰΈ°ΰΉ€ΰΈ—ΰΈ¨ΰΉ„ΰΈ—ΰΈ’ΰΈ‘ΰΈ΅ΰΈˆΰΈ±ΰΈ‡ΰΈ«ΰΈ§ΰΈ±ΰΈ”" example_title: "Thai Geography" - text: "ΰΈ§ΰΈ΄ΰΈ—ΰΈ’ΰΈ²ΰΈ¨ΰΈ²ΰΈͺΰΈ•ΰΈ£ΰΉŒΰΉΰΈ₯ΰΈ°ΰΉ€ΰΈ—ΰΈ„ΰΉ‚ΰΈ™ΰΉ‚ΰΈ₯ΰΈ’ΰΈ΅" example_title: "Science and Technology" - text: "ΰΈ­ΰΈ²ΰΈ«ΰΈ²ΰΈ£ΰΉ„ΰΈ—ΰΈ’ΰΈ—ΰΈ΅ΰΉˆΰΈ‘ΰΈ΅ΰΈŠΰΈ·ΰΉˆΰΈ­ΰΉ€ΰΈͺΰΈ΅ΰΈ’ΰΈ‡" example_title: "Thai Cuisine" --- # Thai Small Language Model with Mixture of Experts (SLM-MoE) ## Model Description This is a Small Language Model (SLM) with Mixture of Experts (MoE) architecture specifically designed for the Thai language. The model was trained from scratch using the ZombitX64/Wikipedia-Thai dataset. ### Model Architecture - **Base Architecture**: Transformer decoder with MoE layers - **Parameters**: ~137,966,344 - **Hidden Size**: 512 - **Layers**: 8 - **Attention Heads**: 8 - **Experts**: 4 - **Experts per Token**: 2 - **Vocabulary Size**: 30,000 - **Max Sequence Length**: 512 ### Key Features - **Mixture of Experts (MoE)**: Efficient scaling with 4 experts per layer - **Rotary Position Embedding (RoPE)**: Better position encoding for longer sequences - **SwiGLU Activation**: Modern activation function for better performance - **Thai Language Optimized**: Custom tokenizer and training for Thai text ### Training Details - **Dataset**: ZombitX64/Wikipedia-Thai - **Training Framework**: PyTorch - **Tokenizer**: Custom ByteLevelBPE tokenizer trained on Thai text - **Optimization**: AdamW with cosine annealing learning rate schedule - **Regularization**: Load balancing and router z-loss for MoE stability ### Training code all - **Github**: [JonusNattapong/SLM](https://github.com/JonusNattapong/SLM) ## Usage ### Installation ```bash pip install torch transformers tokenizers ``` ### Basic Usage ```python import torch from transformers import PreTrainedTokenizerFast # Load model and tokenizer model_name = "JonusNattapong/thai-slm-moe-v2" tokenizer = PreTrainedTokenizerFast.from_pretrained(model_name) # For inference, you'll need to load the custom model architecture # (See the repository for the complete model code) # Generate text prompt = "ΰΈ›ΰΈ£ΰΈ°ΰΉ€ΰΈ—ΰΈ¨ΰΉ„ΰΈ—ΰΈ’ΰΈ‘ΰΈ΅ΰΈˆΰΈ±ΰΈ‡ΰΈ«ΰΈ§ΰΈ±ΰΈ”" inputs = tokenizer(prompt, return_tensors="pt") # ... (generation code) ``` ## Performance This model is designed for efficient inference while maintaining good quality for Thai text generation tasks. ### Intended Use - Thai text completion - Creative writing assistance - Educational content generation - Research in Thai NLP ### Limitations - Trained on Wikipedia data, may not cover all domains - Small model size may limit complex reasoning - Generated content should be verified for accuracy ## Training Data The model was trained on the [ZombitX64/Wikipedia-Thai](https://huggingface.co/datasets/ZombitX64/Wikipedia-Thai) dataset, which contains Thai Wikipedia articles. 
## Ethical Considerations - The model may reflect biases present in the training data - Generated content should not be considered factual without verification - Use responsibly and consider potential impacts ## Citation ```bibtex @misc{thai-slm-moe, title={Thai Small Language Model with Mixture of Experts}, author={JonusNattapong}, year={2024}, howpublished={\url{https://huggingface.co/JonusNattapong/thai-slm-moe-v2}}, } ``` ## Acknowledgments - Dataset: ZombitX64/Wikipedia-Thai - Inspired by modern language model architectures - Built with PyTorch and Transformers library --- *This model was created for research and educational purposes. Please use responsibly.*
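The usage snippet in the card stops before the generation step; a minimal sketch of that missing piece, assuming the custom `slm_moe` architecture can be loaded through remote code (the repo's `custom_code` tag suggests this; otherwise import the model class from the GitHub repository linked above):

```python
import torch
from transformers import AutoModelForCausalLM, PreTrainedTokenizerFast

model_name = "JonusNattapong/thai-slm-moe-v2"
tokenizer = PreTrainedTokenizerFast.from_pretrained(model_name)
# Assumption: the custom architecture is exposed to AutoModelForCausalLM via trust_remote_code.
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
model.eval()

prompt = "..."  # replace with a Thai prompt, e.g. one of the widget examples above
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9, temperature=0.8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```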
RMCian/blockassist-bc-wiry_sturdy_cobra_1754921558
RMCian
2025-08-11T14:13:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry sturdy cobra", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T14:13:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry sturdy cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
RMCian/blockassist-bc-wiry_sturdy_cobra_1754919473
RMCian
2025-08-11T13:38:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry sturdy cobra", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T13:38:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry sturdy cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kapalbalap/blockassist-bc-peaceful_wary_owl_1754916222
kapalbalap
2025-08-11T12:44:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T12:44:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kumoooo/blockassist-bc-aquatic_restless_camel_1754914912
kumoooo
2025-08-11T12:30:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "aquatic restless camel", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T12:30:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - aquatic restless camel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tamewild/4b_v43_merged_e3
tamewild
2025-08-11T08:37:23Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T08:35:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
roeker/blockassist-bc-quick_wiry_owl_1754890653
roeker
2025-08-11T05:39:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T05:38:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754884573
IvanJAjebu
2025-08-11T03:57:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T03:57:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tiantiaf/voxlect-indic-lid-mms-lid-256
tiantiaf
2025-08-10T21:28:34Z
208
1
transformers
[ "transformers", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "speaker_dialect_classification", "audio-classification", "hi", "ur", "en", "ta", "te", "ne", "kn", "ml", "mr", "bn", "dataset:ai4bharat/IndicVoices", "dataset:mozilla-foundation/common_voice_11_0", "arxiv:2508.01691", "base_model:facebook/mms-lid-256", "base_model:finetune:facebook/mms-lid-256", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
audio-classification
2025-07-29T10:47:56Z
--- base_model: - facebook/mms-lid-256 datasets: - ai4bharat/IndicVoices - mozilla-foundation/common_voice_11_0 language: - hi - ur - en - ta - te - ne - kn - ml - mr - bn license: cc-by-nc-4.0 metrics: - accuracy pipeline_tag: audio-classification tags: - model_hub_mixin - pytorch_model_hub_mixin - speaker_dialect_classification library_name: transformers --- # MMS-LID-256 for Regional Languages Classification in India # Model Description This model includes the implementation of regional languages classification in India described in <a href="https://arxiv.org/abs/2508.01691"><strong>**Voxlect: A Speech Foundation Model Benchmark for Modeling Dialect and Regional Languages Around the Globe**</strong></a> Github repository: https://github.com/tiantiaf0627/voxlect The included languages spoken in India are: ``` label_list = [ "assamese", "bengali", "bodo", "dogri", "english", "gujarati", "hindi", "kannada", "kashmiri", "konkani", "maithili", "malayalam", "manipuri", "marathi", "nepali", "odia", "punjabi", "sanskrit", "santali", "sindhi", "tamil", "telugu", "urdu" ] ``` # How to use this model ## Download repo ```bash git clone git@github.com:tiantiaf0627/voxlect ``` ## Install the package ```bash conda create -n voxlect python=3.8 cd voxlect pip install -e . ``` ## Load the model ```python # Load libraries import torch import torch.nn.functional as F from src.model.dialect.mms_dialect import MMSWrapper # Find device device = torch.device("cuda") if torch.cuda.is_available() else "cpu" # Load model from Huggingface model = MMSWrapper.from_pretrained("tiantiaf/voxlect-indic-lid-mms-lid-256").to(device) model.eval() ``` ## Prediction ```python # Label List label_list = [ "assamese", "bengali", "bodo", "dogri", "english", "gujarati", "hindi", "kannada", "kashmiri", "konkani", "maithili", "malayalam", "manipuri", "marathi", "nepali", "odia", "punjabi", "sanskrit", "santali", "sindhi", "tamil", "telugu", "urdu" ] # Load data, here just zeros as the example # Our training data filters output audio shorter than 3 seconds (unreliable predictions) and longer than 15 seconds (computation limitation) # So you need to prepare your audio to a maximum of 15 seconds, 16kHz and mono channel max_audio_length = 15 * 16000 data = torch.zeros([1, 16000]).float().to(device)[:, :max_audio_length] logits, embeddings = model(data, return_feature=True) # Probability and output dialect_prob = F.softmax(logits, dim=1) print(dialect_list[torch.argmax(dialect_prob).detach().cpu().item()]) ``` Responsible Use: Users should respect the privacy and consent of the data subjects, and adhere to the relevant laws and regulations in their jurisdictions when using Voxlect. ## If you have any questions, please contact: Tiantian Feng (tiantiaf@usc.edu) ❌ **Out-of-Scope Use** - Clinical or diagnostic applications - Surveillance - Privacy-invasive applications - No commercial use #### If you like our work or use the models in your work, kindly cite the following. We appreciate your recognition! ``` @article{feng2025voxlect, title={Voxlect: A Speech Foundation Model Benchmark for Modeling Dialects and Regional Languages Around the Globe}, author={Feng, Tiantian and Huang, Kevin and Xu, Anfeng and Shi, Xuan and Lertpetchpun, Thanathai and Lee, Jihwan and Lee, Yoonjeong and Byrd, Dani and Narayanan, Shrikanth}, journal={arXiv preprint arXiv:2508.01691}, year={2025} } ```
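Putting the pieces of the card together on a real recording, a minimal end-to-end sketch (the wav filename is a placeholder; the mono-mixing and resampling steps are assumptions based on the 16 kHz, mono, max-15-second requirement stated above):

```python
import torch
import torch.nn.functional as F
import torchaudio
from src.model.dialect.mms_dialect import MMSWrapper  # from the voxlect repo cloned above

label_list = [
    "assamese", "bengali", "bodo", "dogri", "english", "gujarati", "hindi", "kannada",
    "kashmiri", "konkani", "maithili", "malayalam", "manipuri", "marathi", "nepali",
    "odia", "punjabi", "sanskrit", "santali", "sindhi", "tamil", "telugu", "urdu",
]

device = torch.device("cuda") if torch.cuda.is_available() else "cpu"
model = MMSWrapper.from_pretrained("tiantiaf/voxlect-indic-lid-mms-lid-256").to(device)
model.eval()

# Load a recording, mix to mono, resample to 16 kHz, and cap at 15 seconds as described above.
waveform, sr = torchaudio.load("example.wav")  # placeholder path
waveform = waveform.mean(dim=0, keepdim=True)
if sr != 16000:
    waveform = torchaudio.functional.resample(waveform, sr, 16000)
waveform = waveform[:, : 15 * 16000].float().to(device)

with torch.no_grad():
    logits, embeddings = model(waveform, return_feature=True)
probs = F.softmax(logits, dim=1)
print(label_list[int(torch.argmax(probs, dim=1)[0])])
```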
ngwin/company
ngwin
2025-08-10T19:15:42Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-10T19:15:42Z
--- license: apache-2.0 ---
m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_16000_influential
m-mulet
2025-08-10T18:20:38Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-10T18:20:33Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** m-mulet - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
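The card does not include a usage snippet; a minimal sketch for loading the fine-tune with plain `transformers`, assuming the repo contains merged full weights rather than only LoRA adapters:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_16000_influential"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: pick a dtype that fits your hardware
    device_map="auto",
)

messages = [{"role": "user", "content": "Give me one fun fact about owls."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```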
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754845572
Sayemahsjn
2025-08-10T17:25:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T17:25:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mecha-org/linux-command-generator-llama3.2-1b
mecha-org
2025-08-10T11:28:22Z
112
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "instruction-tuned", "unsloth", "lora", "linux", "command-generation", "conversational", "en", "dataset:custom", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:adapter:unsloth/Llama-3.2-1B-Instruct", "license:other", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-02T08:07:37Z
--- title: Linux Command Generator (Llama 3.2 1B) tags: - text-generation - instruction-tuned - llama - unsloth - lora - linux - command-generation license: other language: - en library_name: transformers pipeline_tag: text-generation datasets: - custom base_model: unsloth/Llama-3.2-1B-Instruct --- ### mecha-org/linux-command-generator-llama3.2-1b Natural language β†’ Linux command. A compact Llama 3.2 1B Instruct model fine‑tuned (LoRA) to turn plain‑English requests into correct shell commands. ## Video Demonstration of the model running on the Mecha Comet <video controls> <source src="https://web-assets.mecha.so/hugging-face/mecha-command-generator-aug-10-2025.mp4" type="video/mp4"> Your browser does not support the video tag. </video> For more information of the Mecha Comet, our pocket little handheld computer - click <a href="https://mecha.so/comet">here</a> ### TL;DR - Base: `unsloth/Llama-3.2-1B-Instruct` - Method: LoRA (r=16, alpha=16, dropout=0) - Context: 2048 tokens - Data: 8,669 pairs across 11 categories - Use cases: quick command lookup, learning CLI, automation ## Run with Ollama (baby steps) 1) Install Ollama: see `https://ollama.com/download`. 2) Verify install: ```bash ollama --version ``` 3) Run the model interactively: ```bash ollama run mecha-org/linux-command-generator-llama3.2-1b ``` Then type a request, e.g.: - "List all files in the current directory with detailed information" - "Compress the file data.txt using bzip2" - "Find all .py files in the current directory and subdirectories" Press Ctrl+C to exit. 4) One‑off (non‑interactive): ```bash ollama run mecha-org/linux-command-generator-llama3.2-1b -p "Display the first 5 lines of access.log" # Expected: head -n 5 access.log ``` 5) Get command‑only answers (when needed): ```bash ollama run mecha-org/linux-command-generator-llama3.2-1b -p "Output only the command with no explanation. Show system information including kernel version" # Expected: uname -a ``` ### Use a local GGUF with Ollama (fallback) If you have `model.gguf`, put it next to a `Modelfile`: ``` FROM ./model.gguf PARAMETER temperature 0.2 PARAMETER top_p 0.9 PARAMETER num_ctx 2048 SYSTEM You are a Linux command generator. Output only the command with no explanation. TEMPLATE {{ .Prompt }} ``` Create and run: ```bash ollama create linux-cmd-gen -f Modelfile ollama run linux-cmd-gen -p "Find all .py files recursively" # Expected: find . 
-name "*.py" ``` ## Other ways to use (optional) ### Transformers ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_id = "mecha-org/linux-command-generator-llama3.2-1b" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16 if torch.cuda.is_available() else None) def generate_command(description: str) -> str: messages = [{"role": "user", "content": description}] inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") if torch.cuda.is_available(): inputs = inputs.to(model.device) model = model.to("cuda") outputs = model.generate(input_ids=inputs, max_new_tokens=64) return tokenizer.decode(outputs[0], skip_special_tokens=True) print(generate_command("List all files in the current directory with detailed information")) # -> ls -la ``` ### Unsloth ```python from unsloth import FastLanguageModel model_id = "mecha-org/linux-command-generator-llama3.2-1b" model, tokenizer = FastLanguageModel.from_pretrained(model_name=model_id, max_seq_length=2048) FastLanguageModel.for_inference(model) msgs = [{"role": "user", "content": "Compress the file data.txt using bzip2"}] inputs = tokenizer.apply_chat_template(msgs, tokenize=True, add_generation_prompt=True, return_tensors="pt") output = model.generate(input_ids=inputs, max_new_tokens=32) print(tokenizer.decode(output[0], skip_special_tokens=True)) # -> bzip2 data.txt ``` ## Example prompts β†’ commands - "Show system information including kernel version" β†’ `uname -a` - "Find all .py files in the current directory and subdirectories" β†’ `find . -name "*.py"` - "Display the first 5 lines of access.log" β†’ `head -n 5 access.log` - "Change permissions of script.sh to make it executable for owner" β†’ `chmod +x script.sh` - "Create a tar archive backup.tar containing all files in the documents folder" β†’ `tar -cf backup.tar documents/` ## Dataset (overview) 8,669 inputβ†’command pairs across: - Compression & Archiving: bzip2, gzip, tar, zip - File & Directory: cd, cp, find, ls, mkdir, mv, pwd, rm, rmdir, touch - Permissions & Ownership: chgrp, chmod, chown - Viewing & Editing: cat, echo, head, less, tail, vim - Networking: curl, dig, host, ifconfig, ip, netstat, ping, ssh, wget - Package mgmt: apt, dpkg - Process mgmt: kill, killall, nice, pkill, renice - Search & Filter: awk, grep, locate, sed - System info/monitoring: df, du, free, top, uname - User/group: useradd, usermod, groupadd, passwd, sudo - Misc/system control: cron, systemctl, tmux, screen, service Format: ```json {"input": "Describe what you want to do", "output": "linux_command_here"} ``` ## Training details - Base: `unsloth/Llama-3.2-1B-Instruct` - LoRA on attention + MLP projections: - r=16, lora_alpha=16, lora_dropout=0 - target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"] - Max sequence length: 2048 - SFT on responses only (TRL SFTTrainer), Unsloth-optimized - Example hparams: per‑device batch 2, grad accum 4, epochs 3, lr 2e‑4 - Reference: Tesla P100 16GB (~45 minutes), ~2.8GB VRAM (adapters) ## Safety and responsible use - Always inspect commands before executing. - Avoid destructive operations unless you fully understand consequences. - For apps, add denylists and validations (e.g., block `rm -rf /`, `mkfs`, `dd`). ## Notes on GGUF - Works with `llama.cpp` and Ollama. - Typical memory (approx.): q4_k_s ~600MB, q4_k_m ~700MB, q8_0 ~1.1GB, f16 ~2.2GB. 
## License Derived from Meta Llama 3.2. Use must comply with the base model license. Check your deployment context for any additional constraints. ## Citation ``` @software{hrsvrn_linux_command_generator_llama32_1b, author = {Harshvardhan Vatsa}, title = {Linux Command Generator (Llama 3.2 1B)}, year = {2025}, url = {https://huggingface.co/mecha-org/linux-command-generator-llama3.2-1b} } ``` ## Acknowledgements - Base: `unsloth/Llama-3.2-1B-Instruct` - Libraries: `unsloth`, `transformers`, `trl`, `accelerate`, `bitsandbytes`
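The safety note above suggests adding denylists and validations before executing anything the model emits; a tiny illustrative sketch of such a guard (the patterns shown are assumptions, not an exhaustive list):

```python
import re

# Patterns for obviously destructive commands; extend this for your own deployment.
DENYLIST = [
    r"\brm\s+-rf\s+/\s*$",      # rm -rf /
    r"\bmkfs(\.\w+)?\b",        # formatting a filesystem
    r"\bdd\s+if=.*\bof=/dev/",  # raw writes to block devices
]

def is_allowed(command: str) -> bool:
    """Return False if the generated command matches a destructive pattern."""
    return not any(re.search(p, command) for p in DENYLIST)

for cmd in ["ls -la", "rm -rf /", "dd if=/dev/zero of=/dev/sda"]:
    print(cmd, "->", "allowed" if is_allowed(cmd) else "blocked")
```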
Azumine/blockassist-bc-coiled_sharp_cockroach_1754818803
Azumine
2025-08-10T10:20:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "coiled sharp cockroach", "arxiv:2504.07091", "region:us" ]
null
2025-08-10T10:20:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - coiled sharp cockroach --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Mungert/granite-3.1-2b-instruct-GGUF
Mungert
2025-08-10T03:00:16Z
607
0
transformers
[ "transformers", "gguf", "language", "granite-3.1", "text-generation", "arxiv:0000.00000", "base_model:ibm-granite/granite-3.1-2b-base", "base_model:quantized:ibm-granite/granite-3.1-2b-base", "license:apache-2.0", "region:us", "conversational" ]
text-generation
2025-07-09T21:28:55Z
--- pipeline_tag: text-generation inference: false license: apache-2.0 library_name: transformers tags: - language - granite-3.1 base_model: - ibm-granite/granite-3.1-2b-base new_version: ibm-granite/granite-3.3-2b-instruct --- # <span style="color: #7FFF7F;">granite-3.1-2b-instruct GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`0a5a3b5c`](https://github.com/ggerganov/llama.cpp/commit/0a5a3b5cdfd887cf0f8e09d9ff89dee130cfcdde). --- <a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;"> Click here to get info on choosing the right GGUF model format </a> --- <!--Begin Original Model Card--> # Granite-3.1-2B-Instruct **Model Summary:** Granite-3.1-2B-Instruct is a 2B parameter long-context instruct model finetuned from Granite-3.1-2B-Base using a combination of open source instruction datasets with permissive license and internally collected synthetic datasets tailored for solving long context problems. This model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging. - **Developers:** Granite Team, IBM - **GitHub Repository:** [ibm-granite/granite-3.1-language-models](https://github.com/ibm-granite/granite-3.1-language-models) - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/) - **Paper:** [Granite 3.1 Language Models (coming soon)](https://huggingface.co/collections/ibm-granite/granite-31-language-models-6751dbbf2f3389bec5c6f02d) - **Release Date**: December 18th, 2024 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) **Supported Languages:** English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.1 models for languages beyond these 12 languages. **Intended Use:** The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications. *Capabilities* * Summarization * Text classification * Text extraction * Question-answering * Retrieval Augmented Generation (RAG) * Code related tasks * Function-calling tasks * Multilingual dialog use cases * Long-context tasks including long document/meeting summarization, long document QA, etc. **Generation:** This is a simple example of how to use Granite-3.1-2B-Instruct model. Install the following libraries: ```shell pip install torch torchvision torchaudio pip install accelerate pip install transformers ``` Then, copy the snippet from the section that is relevant for your use case. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "auto" model_path = "ibm-granite/granite-3.1-2b-instruct" tokenizer = AutoTokenizer.from_pretrained(model_path) # drop device_map if running on CPU model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device) model.eval() # change input text as desired chat = [ { "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." 
}, ] chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) # tokenize the text input_tokens = tokenizer(chat, return_tensors="pt").to(device) # generate output tokens output = model.generate(**input_tokens, max_new_tokens=100) # decode output tokens into text output = tokenizer.batch_decode(output) # print output print(output) ``` **Evaluation Results** <table> <caption><b>HuggingFace Open LLM Leaderboard V1</b></caption> <thead> <tr> <th style="text-align:left; background-color: #001d6c; color: white;">Models</th> <th style="text-align:center; background-color: #001d6c; color: white;">ARC-Challenge</th> <th style="text-align:center; background-color: #001d6c; color: white;">Hellaswag</th> <th style="text-align:center; background-color: #001d6c; color: white;">MMLU</th> <th style="text-align:center; background-color: #001d6c; color: white;">TruthfulQA</th> <th style="text-align:center; background-color: #001d6c; color: white;">Winogrande</th> <th style="text-align:center; background-color: #001d6c; color: white;">GSM8K</th> <th style="text-align:center; background-color: #001d6c; color: white;">Avg</th> </tr></thead> <tbody> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Granite-3.1-8B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">62.62</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">84.48</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">65.34</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">66.23</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">75.37</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">73.84</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">71.31</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: #2D2D2D;">Granite-3.1-2B-Instruct</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">54.61</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">75.14</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">55.31</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">59.42</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">67.48</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">52.76</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">60.79</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-3B-A800M-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">50.42</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">73.01</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">52.19</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">49.71</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">64.87</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">48.97</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">56.53</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-1B-A400M-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">42.66</td> <td style="text-align:center; background-color: #FFFFFF; color: 
#2D2D2D;">65.97</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">26.13</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">46.77</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">62.35</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">33.88</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">46.29</td> </tr> </tbody></table> <table> <caption><b>HuggingFace Open LLM Leaderboard V2</b></caption> <thead> <tr> <th style="text-align:left; background-color: #001d6c; color: white;">Models</th> <th style="text-align:center; background-color: #001d6c; color: white;">IFEval</th> <th style="text-align:center; background-color: #001d6c; color: white;">BBH</th> <th style="text-align:center; background-color: #001d6c; color: white;">MATH Lvl 5</th> <th style="text-align:center; background-color: #001d6c; color: white;">GPQA</th> <th style="text-align:center; background-color: #001d6c; color: white;">MUSR</th> <th style="text-align:center; background-color: #001d6c; color: white;">MMLU-Pro</th> <th style="text-align:center; background-color: #001d6c; color: white;">Avg</th> </tr></thead> <tbody> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Granite-3.1-8B-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">72.08</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">34.09</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">21.68</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8.28</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">19.01</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">28.19</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">30.55</td> </tr> <tr> <td style="text-align:left; background-color: #DAE8FF; color: #2D2D2D;">Granite-3.1-2B-Instruct</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">62.86</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">21.82</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">11.33</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">5.26</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">4.87</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">20.21</td> <td style="text-align:center; background-color: #DAE8FF; color: #2D2D2D;">21.06</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-3B-A800M-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">55.16</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">16.69</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">10.35</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">5.15</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">2.51</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">12.75</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">17.1</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: #2D2D2D;">Granite-3.1-1B-A400M-Instruct</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">46.86</td> <td 
style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">6.18</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">4.08</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">0</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">0.78</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">2.41</td> <td style="text-align:center; background-color: #FFFFFF; color: #2D2D2D;">10.05</td> </tr> </tbody></table> **Model Architecture:** Granite-3.1-2B-Instruct is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings. <table> <thead> <tr> <th style="text-align:left; background-color: #001d6c; color: white;">Model</th> <th style="text-align:center; background-color: #001d6c; color: white;">2B Dense</th> <th style="text-align:center; background-color: #001d6c; color: white;">8B Dense</th> <th style="text-align:center; background-color: #001d6c; color: white;">1B MoE</th> <th style="text-align:center; background-color: #001d6c; color: white;">3B MoE</th> </tr></thead> <tbody> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Embedding size</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">2048</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">4096</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">1024</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">1536</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Number of layers</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">40</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">40</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">24</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">32</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Attention head size</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">64</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">128</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">64</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">64</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Number of attention heads</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">32</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">32</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">16</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">24</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Number of KV heads</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">8</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">MLP hidden size</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">8192</td> <td style="text-align:center; 
background-color: #FFFFFF; color: black;">12800</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">512</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">512</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">MLP activation</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">SwiGLU</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">SwiGLU</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">SwiGLU</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">SwiGLU</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Number of experts</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">β€”</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">β€”</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">32</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">40</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">MoE TopK</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">β€”</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">β€”</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Initialization std</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">0.1</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">0.1</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">0.1</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">0.1</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Sequence length</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">128K</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">128K</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">128K</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">128K</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;">Position embedding</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">RoPE</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">RoPE</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">RoPE</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">RoPE</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;"># Parameters</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">2.5B</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8.1B</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">1.3B</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">3.3B</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;"># Active parameters</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">2.5B</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">8.1B</td> <td style="text-align:center; background-color: #FFFFFF; color: 
black;">400M</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">800M</td> </tr> <tr> <td style="text-align:left; background-color: #FFFFFF; color: black;"># Training tokens</td> <td style="text-align:center; background-color: #DAE8FF; color: black;">12T</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">12T</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">10T</td> <td style="text-align:center; background-color: #FFFFFF; color: black;">10T</td> </tr> </tbody></table> **Training Data:** Overall, our SFT data is largely comprised of three key sources: (1) publicly available datasets with permissive license, (2) internal synthetic data targeting specific capabilities including long-context tasks, and (3) very small amounts of human-curated data. A detailed attribution of datasets can be found in the [Granite 3.0 Technical Report](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf), [Granite 3.1 Technical Report (coming soon)](https://huggingface.co/collections/ibm-granite/granite-31-language-models-6751dbbf2f3389bec5c6f02d), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). **Infrastructure:** We train Granite 3.1 Language Models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs. **Ethical Considerations and Limitations:** Granite 3.1 Instruct Models are primarily finetuned using instruction-response pairs mostly in English, but also multilingual data covering eleven languages. Although this model can handle multilingual dialog use cases, its performance might not be similar to English tasks. In such case, introducing a small number of examples (few-shot) can help the model in generating more accurate outputs. While this model has been aligned by keeping safety in consideration, the model may in some cases produce inaccurate, biased, or unsafe responses to user prompts. So we urge the community to use this model with proper safety testing and tuning tailored for their specific tasks. **Resources** - ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite - πŸ“„ Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/ - πŸ’‘ Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources <!-- ## Citation ``` @misc{granite-models, author = {author 1, author2, ...}, title = {}, journal = {}, volume = {}, year = {2024}, url = {https://arxiv.org/abs/0000.00000}, } ``` --> <!--End Original Model Card--> --- # <span id="testllm" style="color: #7F7FFF;">πŸš€ If you find these models useful</span> Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**: πŸ‘‰ [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) The full Open Source Code for the Quantum Network Monitor Service available at my github repos ( repos with NetworkMonitor in the name) : [Source Code Quantum Network Monitor](https://github.com/Mungert69). 
You will also find the code I use to quantize the models, if you want to do it yourself: [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder). πŸ’¬ **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4.1-mini) - `HugLLM` (Hugging Face open-source models) - `TestLLM` (Experimental CPU-only) ### **What I’m Testing** I’m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap security scans** - **Quantum-readiness checks** - **Network Monitoring tasks** 🟑 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space): - βœ… **Zero-configuration setup** - ⏳ 30s load time (slow inference but **no API costs**). Not token-limited, as the cost is low. - πŸ”§ **Help wanted!** If you’re into **edge-device AI**, let’s collaborate! ### **Other Assistants** 🟒 **TurboLLM** – Uses **gpt-4.1-mini**: - It performs very well, but unfortunately OpenAI charges per token, so token usage is limited. - **Create custom cmd processors to run .net code on Quantum Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) πŸ”΅ **HugLLM** – Latest open-source models: - 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita. ### πŸ’‘ **Example commands you could test**: 1. `"Give me info on my website's SSL certificate"` 2. `"Check if my server is using quantum-safe encryption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution! ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIβ€”all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) β˜•. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! 😊
Mungert/TriLM_390M_Unpacked-GGUF
Mungert
2025-08-10T01:53:00Z
1,349
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-03-16T23:15:08Z
--- license: apache-2.0 --- # <span style="color: #7FFF7F;">TriLM_390M_Unpacked GGUF Models</span> ## **Choosing the Right Model Format** Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**. ### **BF16 (Brain Float 16) – Use if BF16 acceleration is available** - A 16-bit floating-point format designed for **faster computation** while retaining good precision. - Provides a **similar dynamic range** to FP32 but with **lower memory usage**. - Recommended if your hardware supports **BF16 acceleration** (check your device’s specs). - Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32. πŸ“Œ **Use BF16 if:** βœ” Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs). βœ” You want **higher precision** while saving memory. βœ” You plan to **requantize** the model into another format. πŸ“Œ **Avoid BF16 if:** ❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower). ❌ You need compatibility with older devices that lack BF16 optimization. --- ### **F16 (Float 16) – More widely supported than BF16** - A 16-bit floating-point format with **high precision** but a narrower range of values than BF16. - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs). - Slightly lower numerical precision than BF16 but generally sufficient for inference. πŸ“Œ **Use F16 if:** βœ” Your hardware supports **FP16** but **not BF16**. βœ” You need a **balance between speed, memory usage, and accuracy**. βœ” You are running on a **GPU** or another device optimized for FP16 computations. πŸ“Œ **Avoid F16 if:** ❌ Your device lacks **native FP16 support** (it may run slower than expected). ❌ You have memory limitations. --- ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference** Quantization reduces model size and memory usage while maintaining as much accuracy as possible. - **Lower-bit models (Q4_K)** β†’ **Best for minimal memory usage**, may have lower precision. - **Higher-bit models (Q6_K, Q8_0)** β†’ **Better accuracy**, require more memory. πŸ“Œ **Use Quantized Models if:** βœ” You are running inference on a **CPU** and need an optimized model. βœ” Your device has **low VRAM** and cannot load full-precision models. βœ” You want to reduce **memory footprint** while keeping reasonable accuracy. πŸ“Œ **Avoid Quantized Models if:** ❌ You need **maximum accuracy** (full-precision models are better for this). ❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16). --- ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)** These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint. - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**. - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large. - **Trade-off**: Lower accuracy compared to higher-bit quantizations. - **IQ3_S**: Small block size for **maximum memory efficiency**. - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive. - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**. - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting. - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy. - **Use case**: Best for **low-memory devices** where **Q6_K** is too large. 
- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**. - **Use case**: Best for **ARM-based devices** or **low-memory environments**. --- ### **Summary Table: Model Format Selection** | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case | |--------------|------------|---------------|----------------------|---------------| | **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory | | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn’t available | | **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments | | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized | | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models | | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy | | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices | --- ## **Included Files & Details** ### `TriLM_390M_Unpacked-bf16.gguf` - Model weights preserved in **BF16**. - Use this if you want to **requantize** the model into a different format. - Best if your device supports **BF16 acceleration**. ### `TriLM_390M_Unpacked-f16.gguf` - Model weights stored in **F16**. - Use if your device supports **FP16**, especially if BF16 is not available. ### `TriLM_390M_Unpacked-bf16-q8_0.gguf` - **Output & embeddings** remain in **BF16**. - All other layers quantized to **Q8_0**. - Use if your device supports **BF16** and you want a quantized version. ### `TriLM_390M_Unpacked-f16-q8_0.gguf` - **Output & embeddings** remain in **F16**. - All other layers quantized to **Q8_0**. ### `TriLM_390M_Unpacked-q4_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q4_K**. - Good for **CPU inference** with limited memory. ### `TriLM_390M_Unpacked-q4_k_s.gguf` - Smallest **Q4_K** variant, using less memory at the cost of accuracy. - Best for **very low-memory setups**. ### `TriLM_390M_Unpacked-q6_k.gguf` - **Output & embeddings** quantized to **Q8_0**. - All other layers quantized to **Q6_K**. ### `TriLM_390M_Unpacked-q8_0.gguf` - Fully **Q8** quantized model for better accuracy. - Requires **more memory** but offers higher precision. ### `TriLM_390M_Unpacked-iq3_xs.gguf` - **IQ3_XS** quantization, optimized for **extreme memory efficiency**. - Best for **ultra-low-memory devices**. ### `TriLM_390M_Unpacked-iq3_m.gguf` - **IQ3_M** quantization, offering a **medium block size** for better accuracy. - Suitable for **low-memory devices**. ### `TriLM_390M_Unpacked-q4_0.gguf` - Pure **Q4_0** quantization, optimized for **ARM devices**. - Best for **low-memory environments**. - Prefer IQ4_NL for better accuracy. (A minimal llama-cpp-python usage sketch for these GGUF files is included at the end of this card.) # <span id="testllm" style="color: #7F7FFF;">πŸš€ If you find these models useful</span> Please click like ❀. Also, I’d really appreciate it if you could test my Network Monitor Assistant at πŸ‘‰ [Network Monitor Assistant](https://readyforquantum.com). πŸ’¬ Click the **chat icon** (bottom right of the main and dashboard pages). Choose an LLM; toggle between the LLM types TurboLLM -> FreeLLM -> TestLLM. ### What I'm Testing I'm experimenting with **function calling** against my network monitoring service, using small open-source models. I am interested in the question: how small can a model go and still function? 
🟑 **TestLLM** – Runs the current testing model using llama.cpp on 6 threads of a CPU VM (should take about 15s to load; inference is quite slow and it only processes one user prompt at a timeβ€”still working on scaling!). If you're curious, I'd be happy to share how it works! ### The Other Available AI Assistants 🟒 **TurboLLM** – Uses **gpt-4o-mini**. Fast! Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://readyforquantum.com) or [Download](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) the Quantum Network Monitor agent to get more tokens. Alternatively, use the TestLLM. πŸ”΅ **HugLLM** – Runs **open-source Hugging Face models**. Fast; runs small models (β‰ˆ8B), hence lower quality. Get 2x more tokens (subject to Hugging Face API availability). ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIβ€”all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) β˜•. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! 😊 # TriLM 390M Unpacked TriLM (ternary model), unpacked to FP16 format - compatible with FP16 GEMMs. After unpacking, TriLM has the same architecture as LLaMa.

```python
import torch
import transformers as tf

model_name = "SpectraSuite/TriLM_390M_Unpacked"

# Please adjust the temperature, repetition penalty, top_k, top_p and other sampling parameters according to your needs.
pipeline = tf.pipeline(
    "text-generation",
    model=model_name,
    model_kwargs={"torch_dtype": torch.float16},
    device_map="auto",
)

# These are base (pretrained) LLMs that are not instruction and chat tuned. You may need to adjust your prompt accordingly.
pipeline("Once upon a time")
```

* License: Apache 2.0
* We will use our GitHub repo for communication (including HF repo related queries). Feel free to open an issue here: https://github.com/NolanoOrg/SpectraSuite
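Complementing the FP16 example above, here is a minimal sketch for running one of the GGUF quant files listed earlier in this card. This is an editorial illustration rather than part of the original card: it assumes `llama-cpp-python` is installed and that the chosen `.gguf` file has already been downloaded locally; swap in whichever quant you picked.

```python
# Minimal llama-cpp-python sketch for the GGUF quants listed above (an assumption,
# not an official recipe). Requires: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./TriLM_390M_Unpacked-q4_k.gguf",  # any quant file from the list above
    n_ctx=2048,    # context window; keep small on low-memory devices
    n_threads=4,   # CPU threads used for inference
)

# TriLM is a base (pretrained) model, so prompt it for plain text continuation.
out = llm("Once upon a time", max_tokens=64, temperature=0.8)
print(out["choices"][0]["text"])
```

Lower-bit quants (IQ3_*, Q4_*) trade accuracy for memory exactly as described in the format-selection notes above; Q6_K and Q8_0 stay closer to the F16 baseline at the cost of a larger file.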
winnieyangwannan/entity_sft_Llama-3.1-8B-Instruct_lora_4_lr_0.0001_1280_all_37_epoch_1
winnieyangwannan
2025-08-10T01:07:21Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-10T00:22:15Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
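The getting-started section of this auto-generated card is left as "[More Information Needed]". Purely as a hedged placeholder inferred from the repo tags (`transformers`, `llama`, `text-generation`) and not confirmed by the authors, a standard causal-LM loading sketch might look like this:

```python
# Hedged sketch only: assumes this repo holds standard Llama-3.1-style merged weights,
# as suggested by the repo tags; the card itself does not document usage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "winnieyangwannan/entity_sft_Llama-3.1-8B-Instruct_lora_4_lr_0.0001_1280_all_37_epoch_1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```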
Daniil-plotnikov/ruLFM2-1.2B
Daniil-plotnikov
2025-08-09T21:48:29Z
15
0
transformers
[ "transformers", "safetensors", "lfm2", "text-generation", "conversational", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-09T17:20:59Z
--- language: - ru pipeline_tag: text-generation library_name: transformers --- ruLFM2-1.2B is a version of LFM2-1.2B fine-tuned for the Russian language.
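The card above does not include usage code. As a hedged sketch (my assumption, not from the author): the repo is tagged `transformers`, `lfm2`, and `conversational`, so it should load with a recent transformers release that includes LFM2 support and expose a chat template.

```python
# Hedged usage sketch for Daniil-plotnikov/ruLFM2-1.2B; assumes a recent transformers
# release with LFM2 support. Adjust dtype/device for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daniil-plotnikov/ruLFM2-1.2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Russian prompt ("Briefly explain what machine learning is."), since the model is tuned for Russian.
messages = [{"role": "user", "content": "ΠšΡ€Π°Ρ‚ΠΊΠΎ объясни, Ρ‡Ρ‚ΠΎ Ρ‚Π°ΠΊΠΎΠ΅ машинноС ΠΎΠ±ΡƒΡ‡Π΅Π½ΠΈΠ΅."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```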
Jboadu/new-gaia
Jboadu
2025-08-09T14:23:54Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "axolotl", "generated_from_trainer", "conversational", "dataset:axolotl_correction_conversations_GAIA_Raw_Training_Data.json", "dataset:factual_sft_completion/combined_all_0.jsonl", "dataset:factual_sft_completion/combined_all_2.jsonl", "dataset:factual_sft_completion/combined_all_6.jsonl", "dataset:factual_sft_completion/combined_all_4.jsonl", "dataset:factual_sft_completion/combined_all_3.jsonl", "dataset:factual_sft_completion/combined_all_1.jsonl", "dataset:factual_sft_completion/combined_all_5.jsonl", "dataset:factual_sft_completion/combined_all_7.jsonl", "dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_300000.jsonl", "dataset:generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_600000.jsonl", "dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_400000.jsonl", "dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_200000.jsonl", "base_model:Jboadu/test-model-2-pretrain", "base_model:finetune:Jboadu/test-model-2-pretrain", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-09T08:51:41Z
--- library_name: transformers license: apache-2.0 base_model: Jboadu/test-model-2-pretrain tags: - axolotl - generated_from_trainer datasets: - axolotl_correction_conversations_GAIA_Raw_Training_Data.json - factual_sft_completion/combined_all_0.jsonl - factual_sft_completion/combined_all_2.jsonl - factual_sft_completion/combined_all_6.jsonl - factual_sft_completion/combined_all_4.jsonl - factual_sft_completion/combined_all_3.jsonl - factual_sft_completion/combined_all_1.jsonl - factual_sft_completion/combined_all_5.jsonl - factual_sft_completion/combined_all_7.jsonl - generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_300000.jsonl - generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_600000.jsonl - generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_400000.jsonl - generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_200000.jsonl model-index: - name: new-gaia results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.12.0` ```yaml base_model: Jboadu/test-model-2-pretrain tokenizer_type: AutoTokenizer model_type: AutoModelForCausalLM load_in_8bit: false load_in_4bit: false strict: false datasets: - path: axolotl_correction_conversations_GAIA_Raw_Training_Data.json type: input_output - path: factual_sft_completion/combined_all_0.jsonl type: completion - path: factual_sft_completion/combined_all_2.jsonl type: completion - path: factual_sft_completion/combined_all_6.jsonl type: completion - path: factual_sft_completion/combined_all_4.jsonl type: completion - path: factual_sft_completion/combined_all_3.jsonl type: completion - path: factual_sft_completion/combined_all_1.jsonl type: completion - path: factual_sft_completion/combined_all_5.jsonl type: completion - path: factual_sft_completion/combined_all_7.jsonl type: completion - path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_300000.jsonl type: completion - path: generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_600000.jsonl type: completion - path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_400000.jsonl type: completion - path: generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_200000.jsonl type: completion dataset_prepared_path: last_finetune_prepared output_dir: ./finetune-model-output seed: 1337 sequence_len: 5000 sample_packing: true pad_to_sequence_len: false shuffle_merged_datasets: true gradient_accumulation_steps: 75 micro_batch_size: 2 eval_batch_size: 4 num_epochs: 5 optimizer: paged_adamw_8bit lr_scheduler: constant learning_rate: 2.0e-05 noisy_embedding_alpha: 5 weight_decay: 0 train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true logging_steps: 1 xformers_attention: false flash_attention: true chat_template: chatml auto_resume_from_checkpoints: false warmup_ratio: 0.1 evals_per_epoch: 1 val_set_size: 0.04 saves_per_epoch: 1 eval_sample_packing: false save_total_limit: 2 special_tokens: pad_token: <unk> use_liger_kernel: true plugins: - 
axolotl.integrations.liger.LigerPlugin liger_rope: true liger_rms_norm: true liger_glu_activation: true liger_layer_norm: true liger_fused_linear_cross_entropy: true sequence_length: 10000 wandb_project: test-project wandb_entity: '' wandb_watch: '' wandb_run_id: '' wandb_log_model: '' hub_model_id: Jboadu/new-gaia hub_strategy: all_checkpoints ``` </details><br> # new-gaia This model is a fine-tuned version of [Jboadu/test-model-2-pretrain](https://huggingface.co/Jboadu/test-model-2-pretrain) on the axolotl_correction_conversations_GAIA_Raw_Training_Data.json, the factual_sft_completion/combined_all_0.jsonl, the factual_sft_completion/combined_all_2.jsonl, the factual_sft_completion/combined_all_6.jsonl, the factual_sft_completion/combined_all_4.jsonl, the factual_sft_completion/combined_all_3.jsonl, the factual_sft_completion/combined_all_1.jsonl, the factual_sft_completion/combined_all_5.jsonl, the factual_sft_completion/combined_all_7.jsonl, the generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_300000.jsonl, the generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_600000.jsonl, the generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_400000.jsonl and the generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_200000.jsonl datasets. It achieves the following results on the evaluation set: - Loss: 0.5799 - Memory/max Mem Active(gib): 31.49 - Memory/max Mem Allocated(gib): 31.49 - Memory/device Mem Reserved(gib): 33.38 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 4 - seed: 1337 - gradient_accumulation_steps: 75 - total_train_batch_size: 150 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 2 - training_steps: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mem Active(gib) | Mem Allocated(gib) | Mem Reserved(gib) | |:-------------:|:------:|:----:|:---------------:|:---------------:|:------------------:|:-----------------:| | No log | 0 | 0 | 1.4924 | 19.79 | 19.79 | 23.71 | | 0.8381 | 0.9585 | 4 | 0.7330 | 31.49 | 31.49 | 33.38 | | 0.5844 | 1.7188 | 8 | 0.6324 | 31.49 | 31.49 | 33.38 | | 0.4746 | 2.4792 | 12 | 0.5766 | 31.49 | 31.49 | 33.38 | | 0.3431 | 3.4792 | 16 | 0.5799 | 31.49 | 31.49 | 33.38 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.7.1+cu128 - Datasets 4.0.0 - Tokenizers 0.21.4
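The card gives no inference example. Since the axolotl config above sets `chat_template: chatml`, a hedged sketch of chat-style inference with transformers might look like this (an assumption on my part, not an official recipe from the author):

```python
# Hedged inference sketch for Jboadu/new-gaia: the training config above uses
# chat_template: chatml, so the saved tokenizer should format prompts as ChatML.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Jboadu/new-gaia"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What topics were covered in your training data?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```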
rbelanec/train_multirc_1754502823
rbelanec
2025-08-07T03:52:55Z
22
0
peft
[ "peft", "safetensors", "llama-factory", "prompt-tuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2025-08-06T17:54:50Z
--- library_name: peft license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - llama-factory - prompt-tuning - generated_from_trainer model-index: - name: train_multirc_1754502823 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_multirc_1754502823 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the multirc dataset. It achieves the following results on the evaluation set: - Loss: 0.2072 - Num Input Tokens Seen: 132272272 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 123 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:-----:|:-----:|:---------------:|:-----------------:| | 0.3572 | 0.5 | 3065 | 0.2742 | 6639424 | | 0.4476 | 1.0 | 6130 | 0.2313 | 13255424 | | 0.1739 | 1.5 | 9195 | 0.2072 | 19871232 | | 0.3827 | 2.0 | 12260 | 0.2300 | 26471216 | | 0.1174 | 2.5 | 15325 | 0.2256 | 33075856 | | 0.1551 | 3.0 | 18390 | 0.2537 | 39694112 | | 0.1436 | 3.5 | 21455 | 0.2342 | 46313216 | | 0.0008 | 4.0 | 24520 | 0.2358 | 52929744 | | 0.5876 | 4.5 | 27585 | 0.2123 | 59549072 | | 0.1874 | 5.0 | 30650 | 0.2234 | 66152480 | | 0.3621 | 5.5 | 33715 | 0.2219 | 72765696 | | 0.0772 | 6.0 | 36780 | 0.2299 | 79389648 | | 0.2705 | 6.5 | 39845 | 0.2456 | 86008784 | | 0.2328 | 7.0 | 42910 | 0.2416 | 92621824 | | 0.2648 | 7.5 | 45975 | 0.2336 | 99237152 | | 0.0007 | 8.0 | 49040 | 0.2341 | 105830544 | | 0.1746 | 8.5 | 52105 | 0.2351 | 112458064 | | 0.0891 | 9.0 | 55170 | 0.2340 | 119047920 | | 0.486 | 9.5 | 58235 | 0.2340 | 125686064 | | 0.269 | 10.0 | 61300 | 0.2361 | 132272272 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.8.0+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
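The card does not show how to load the adapter. A hedged sketch using PEFT's `AutoPeftModelForCausalLM` is below; it assumes access to the gated `meta-llama/Meta-Llama-3-8B-Instruct` base model, and the MultiRC-style prompt is illustrative only, since the exact template used during training is not documented here.

```python
# Hedged sketch, not from the card: load the prompt-tuning adapter on top of its
# Llama 3 base with PEFT. Requires access to the gated base model on the Hub.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "rbelanec/train_multirc_1754502823"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# MultiRC asks whether a candidate answer to a question about a passage is correct.
prompt = (
    "Passage: The sun rises in the east.\n"
    "Question: Where does the sun rise?\n"
    "Candidate answer: In the east.\n"
    "Is the candidate answer correct?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```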