Dataset columns (min/max statistics as reported by the dataset viewer):

| column | dtype | min | max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-29 12:28:39 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (526 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-29 12:28:30 |
| card | string (length) | 11 | 1.01M |
Sarim-Hash/subset_moldir_1.5b
Sarim-Hash
2025-08-29T10:53:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "generated_from_trainer", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-29T05:28:32Z
--- library_name: transformers tags: - llama-factory - generated_from_trainer model-index: - name: subset_moldir_1.5b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # subset_moldir_1.5b This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - total_eval_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0
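The hyperparameters in the card above imply the reported effective batch sizes (per-device batch size × number of devices × gradient-accumulation steps); a quick arithmetic check of the card's numbers:

```python
# Effective batch sizes implied by the training hyperparameters above.
train_batch_size = 8             # per-device train batch size
eval_batch_size = 8              # per-device eval batch size
num_devices = 4                  # distributed_type: multi-GPU
gradient_accumulation_steps = 8

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices   # no accumulation at eval time

print(total_train_batch_size)    # 256, matching the card
print(total_eval_batch_size)     # 32, matching the card
```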
giovannidemuri/llama8b-er-v500-seed2-hx
giovannidemuri
2025-08-29T10:52:24Z
17
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-28T21:34:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Satram/QYA_300_Context
Satram
2025-08-29T10:51:14Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-29T10:50:54Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Satram - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
eusuf01/blockassist-bc-smooth_humming_butterfly_1756464594
eusuf01
2025-08-29T10:50:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:50:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Leofames/train
Leofames
2025-08-29T10:50:51Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-29T10:50:51Z
--- license: apache-2.0 ---
marduk191/Cant_believe_its_not_Photon
marduk191
2025-08-29T10:50:36Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-08-29T10:45:38Z
--- license: creativeml-openrail-m --- Can't believe it's not Photon https://civitai.com/models/111362/cant-believe-its-not-photon A ComfyUI workflow is now available for this model [HERE](https://civitai.com/models/200502?modelVersionId=225629). [Tips Welcome](https://ko-fi.com/marduk191): https://ko-fi.com/marduk191 [on-site generation is available on tensorart for generation.](https://tensor.art/models/654713570648446459) ~ Recommended Settings: Steps: 20 CFG: 2-4.5 Sampler: DPM++ 2m sde Scheduler: karras Denoise: 1 VAE: Baked (vae-ft-mse-840000-ema-pruned) ~ Recommended Negative: [ERA09NEGV2](https://civitai.com/models/111392/era09-detailer-negative-embedding) ~ [Discord](https://discord.gg/btCfTh4jgt): https://discord.gg/btCfTh4jgt ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/buar6n7gvEixVjTd2k1SL.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/DK0Ck2BM1j2jLVNC5oWQF.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/LyKMvSaCavymuAoii_rEU.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/qh5txy1OEcRWxRD1qfWF5.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/tqz8C7h2iUjOwOUQIT9XE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/YaQVqhf3OCAlOBV-L-TEv.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/etxXyFc045QHn0Bmex3pO.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/NNU4w3ZoNQlYyke3Xl5lJ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/x-7s4pnvulFJ46-rZ6Oug.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/Gd5NAmUWdu0YhEsFYLRhQ.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/7jDY3bATNMxG6wKvW4yON.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/df8fqF2fXzp-gxufcrzab.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/6DM_2cAF1jaWPbJmdmo5f.png) ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/646bb2521da1b6d027fc7b8a/Tdfmnk04amAIrznmpSLVo.jpeg)
bah63843/blockassist-bc-plump_fast_antelope_1756464567
bah63843
2025-08-29T10:50:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:50:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Manchesterokaa/Manchesterokaa
Manchesterokaa
2025-08-29T10:50:14Z
9
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:Manchesterokaa/Record400", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-08-22T16:28:43Z
--- datasets: Manchesterokaa/Record400 library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - act - robotics - lerobot --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash lerobot-train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash lerobot-record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
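ACT's core idea mentioned in the card above — predicting a short chunk of future actions per policy call instead of one action per step — can be illustrated with a toy sketch; the chunk size and the counting "policy" here are hypothetical stand-ins, not LeRobot's API:

```python
# Toy illustration of action chunking: one policy call yields a chunk of
# H actions, so a T-step episode needs ceil(T / H) policy calls instead of T.
import math

def run_episode(horizon, chunk_size):
    """Count policy invocations when executing whole action chunks open-loop."""
    calls = 0
    t = 0
    while t < horizon:
        calls += 1                                          # one policy forward pass
        chunk = [f"a_{t + i}" for i in range(chunk_size)]   # predicted action chunk
        t += len(chunk)                                     # execute the whole chunk
    return calls

T, H = 300, 100                         # episode length and chunk size (hypothetical)
print(run_episode(T, H))                # 3 calls with chunking
print(run_episode(T, 1))                # 300 calls with per-step control
assert run_episode(T, H) == math.ceil(T / H)
```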
vendi11/blockassist-bc-placid_placid_llama_1756464497
vendi11
2025-08-29T10:48:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:48:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
DopeorNope/group_theory_lora_4700
DopeorNope
2025-08-29T10:48:47Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-29T10:47:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RedHatAI/Devstral-Small-2507-FP8-Dynamic
RedHatAI
2025-08-29T10:48:21Z
20
0
null
[ "safetensors", "mistral", "neuralmagic", "redhat", "llmcompressor", "quantized", "FP8", "compressed-tensors", "text-generation", "en", "base_model:mistralai/Devstral-Small-2507", "base_model:quantized:mistralai/Devstral-Small-2507", "license:mit", "region:us" ]
text-generation
2025-08-28T13:36:00Z
--- language: - en base_model: - mistralai/Devstral-Small-2507 pipeline_tag: text-generation tags: - mistral - neuralmagic - redhat - llmcompressor - quantized - FP8 - compressed-tensors license: mit license_name: mit name: RedHatAI/Devstral-Small-2507 description: This model was obtained by quantizing weights and activations of Devstral-Small-2507 to FP8 data type. readme: https://huggingface.co/RedHatAI/Devstral-Small-2507-FP8-Dynamic/main/README.md tasks: - text-to-text provider: mistralai --- # Devstral-Small-2507-FP8-Dynamic ## Model Overview - **Model Architecture:** MistralForCausalLM - **Input:** Text - **Output:** Text - **Model Optimizations:** - **Activation quantization:** FP8 - **Weight quantization:** FP8 - **Release Date:** 08/28/2025 - **Version:** 1.0 - **Model Developers:** Red Hat (Neural Magic) ### Model Optimizations This model was obtained by quantizing weights and activations of [Devstral-Small-2507](https://huggingface.co/mistralai/Devstral-Small-2507) to FP8 data type. This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%). Weight quantization also reduces disk size requirements by approximately 50%. ## Creation <details> This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below. 
```python from transformers import AutoModelForCausalLM from llmcompressor import oneshot from llmcompressor.modifiers.quantization import QuantizationModifier MODEL_ID = "mistralai/Devstral-Small-2507" model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto") recipe = QuantizationModifier( targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"] ) oneshot(model=model, recipe=recipe) SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-FP8-Dynamic" model.save_pretrained(SAVE_DIR) ``` </details> ## Deployment This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below. ```bash vllm serve RedHatAI/Devstral-Small-2507-FP8-Dynamic --tensor-parallel-size 1 --tokenizer_mode mistral ``` ## Evaluation The model was evaluated on popular coding tasks (HumanEval, HumanEval+, MBPP, MBPP+) via [EvalPlus](https://github.com/evalplus/evalplus) and vllm backend (v0.10.1.1). For evaluations, we run greedy sampling and report pass@1. The command to reproduce evals: ```bash evalplus.evaluate --model "RedHatAI/Devstral-Small-2507-FP8-Dynamic" \ --dataset [humaneval|mbpp] \ --base-url http://localhost:8000/v1 \ --backend openai --greedy ``` ### Accuracy | | Recovery (%) | mistralai/Devstral-Small-2507 | RedHatAI/Devstral-Small-2507-FP8-Dynamic<br>(this model) | | --------------------------- | :----------: | :------------------: | :--------------------------------------------------: | | HumanEval | 100.67 | 89.0 | 89.6 | | HumanEval+ | 102.22 | 81.1 | 82.9 | | MBPP | 97.29 | 77.5 | 75.4 | | MBPP+ | 98.03 | 66.1 | 64.8 | | **Average Score** | **99.68** | **78.43** | **78.18** |
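The Recovery column in the accuracy table above is the quantized model's score as a percentage of the baseline; the reported values can be reproduced in a few lines (scores copied from the table):

```python
# Recompute the Recovery (%) column from the pass@1 scores in the table above.
baseline = {"HumanEval": 89.0, "HumanEval+": 81.1, "MBPP": 77.5, "MBPP+": 66.1}
fp8      = {"HumanEval": 89.6, "HumanEval+": 82.9, "MBPP": 75.4, "MBPP+": 64.8}

recovery = {k: round(100 * fp8[k] / baseline[k], 2) for k in baseline}
avg_recovery = round(100 * sum(fp8.values()) / sum(baseline.values()), 2)

print(recovery)      # {'HumanEval': 100.67, 'HumanEval+': 102.22, 'MBPP': 97.29, 'MBPP+': 98.03}
print(avg_recovery)  # 99.68

# The average scores 78.425 and 78.175 round to the card's 78.43 and 78.18.
avg_base = sum(baseline.values()) / len(baseline)
avg_fp8 = sum(fp8.values()) / len(fp8)
```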
ffurfaro/Titans-v2-Mistral-7B-v0.3
ffurfaro
2025-08-29T10:47:54Z
0
1
transformers
[ "transformers", "tensorboard", "safetensors", "tptt", "peft", "trust_remote_code", "text-generation", "en", "dataset:yahma/alpaca-cleaned", "arxiv:2506.17671", "base_model:mistralai/Mistral-7B-v0.3", "base_model:finetune:mistralai/Mistral-7B-v0.3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-08-25T12:10:14Z
--- language: en license: apache-2.0 library_name: transformers tags: - tptt - peft - trust_remote_code pipeline_tag: text-generation base_model: mistralai/Mistral-7B-v0.3 datasets: - yahma/alpaca-cleaned --- # Titans-v2-Mistral-7B-v0.3 <p align="center"> <a href="https://arxiv.org/abs/2506.17671"> <img alt="arXiv" src="https://img.shields.io/badge/arXiv-tptt-blueviolet.svg"> </a> <a href="https://pypi.org/project/tptt/"> <img alt="PyPI" src="https://img.shields.io/pypi/v/tptt?color=orange"> </a> <a href="https://github.com/fabienfrfr/tptt/"> <img alt="Release" src="https://img.shields.io/github/v/release/fabienfrfr/tptt?color=brightgreen"> </a> <a href="https://fabienfrfr.github.io/tptt/"> <img alt="Documentation" src="https://img.shields.io/badge/docs-online-blue"> </a> <a href="https://huggingface.co/ffurfaro"> <img alt="HuggingFace" src="https://img.shields.io/badge/hf-ffurfaro-yellow"> </a> </p> Titanesque version of `mistralai/Mistral-7B-v0.3` with parallel linearized attention (TPTT 😊) and PEFT. The architecture was presented in the paper [TPTT](https://huggingface.co/papers/2506.17671). 
## Model list Classic model parameters with LiZA injection: | Subfolder | Max Self Attn Length | Mag Weight | Cross Gate | Max Chunk Size | Bidirectional | LoRA | Description | |-------------------------------|----------------------|------------|------------|----------------|---------------|------|-------------------------------------------------------| | delta_rule | 8192 (default) | 0.5 | False | 64 | False | Yes | Parallel linearized attention with delta_rule operator| | delta_rule_gelu | 8192 (default) | 0.5 | False | 64 | False | Yes | Non-linear operator with gelu activation | | delta_product | 8192 (default) | 0.5 | False | 64 | False | Yes | Second order operator with derivative trick | | delta_product_r | 8192 (default) | 0.5 | False | 64 | False | Yes | Second order operator with rotative trick | | delta_product_c | 8192 (default) | 0.5 | False | 64 | False | Yes | Second order operator with combined trick | ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained( "ffurfaro/Titans-v2-Mistral-7B-v0.3", subfolder="tptt_subfolder", # see in repo tree trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("ffurfaro/mistralai/Mistral-7B-v0.3") prompt = "Your prompt here" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Citation & Contact If you use TPTT in your academic work, please cite [Furfaro](https://huggingface.co/ffurfaro). For questions or support, please open an issue on the [GitHub repository](https://github.com/fabienfrfr/tptt) or contact the maintainer. ---
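The `delta_rule` operator listed in the table above refers to the error-correcting state update used in linearized attention: the state matrix is nudged toward each new key/value pair by the prediction error. A minimal pure-Python sketch of that idea follows; it is a conceptual illustration only, not TPTT's actual implementation:

```python
# Minimal delta-rule linear-attention sketch (illustrative, not TPTT's code).
# The state S is a d_k x d_v matrix updated by S += outer(k, v - S^T k).

def matvec_T(S, x):
    """Compute S^T x: read the value vector currently stored for key x."""
    d_v = len(S[0])
    return [sum(S[i][j] * x[i] for i in range(len(S))) for j in range(d_v)]

def delta_rule_step(S, k, v):
    """Correct S by the prediction error for the pair (k, v)."""
    pred = matvec_T(S, k)                         # what S currently predicts for k
    err = [vi - pi for vi, pi in zip(v, pred)]    # prediction error v - S^T k
    for i in range(len(k)):
        for j in range(len(err)):
            S[i][j] += k[i] * err[j]              # rank-1 update outer(k, err)
    return S

d_k, d_v = 3, 2
S = [[0.0] * d_v for _ in range(d_k)]
k, v = [1.0, 0.0, 0.0], [2.0, -1.0]
delta_rule_step(S, k, v)
print(matvec_T(S, k))   # reading back with the same unit key returns v: [2.0, -1.0]
```

The error-correcting form means a later write with the same key overwrites the stored value rather than accumulating on top of it, which is what distinguishes the delta rule from plain additive linear attention.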
Genesis-Pena-V-I-D-E-O/VIDEO.FULL.GENESIS.PENA.Viral.Video.Tutorial.Official
Genesis-Pena-V-I-D-E-O
2025-08-29T10:47:48Z
0
0
null
[ "region:us" ]
null
2025-08-29T10:46:59Z
[🟢 ➤ ➤ ➤ 🌐 𝖢𝗅𝗂𝖼𝗄 𝖧𝖾𝗋𝖾 𝖳𝗈 𝗅𝗂𝗇𝗄 (𝖥𝗎𝗅𝗅 𝖵𝗂𝗋𝖺𝗅 𝖵𝗂𝖽𝖾𝗈 𝖫𝗂𝗇𝗄)](https://cloudsportek.com/ok/hd7ags/?king) [![image/gif](https://cdn-uploads.huggingface.co/production/uploads/683d278851706d12b2cbc4eb/OMYmxOdS-sy4ZshNCnNav.gif)](https://cloudsportek.com/ok/hd7ags/?king)
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756464398
Ferdi3425
2025-08-29T10:47:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:47:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756464370
eusuf01
2025-08-29T10:46:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:46:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dilshad24/unsloth-Qwen3-14B-16bit
Dilshad24
2025-08-29T10:46:31Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-29T10:42:16Z
--- base_model: unsloth/qwen3-14b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Dilshad24 - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
iko-01/ARABIC_poetry
iko-01
2025-08-29T10:46:17Z
0
1
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "art", "arabic", "AI", "poetry", "ar", "base_model:aubmindlab/aragpt2-base", "base_model:finetune:aubmindlab/aragpt2-base", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-29T09:37:05Z
--- license: mit language: - ar base_model: - aubmindlab/aragpt2-base pipeline_tag: text-generation library_name: transformers tags: - art - arabic - AI - poetry --- **ARABIC_poetry model** This model specializes in generating Arabic verses and poems. It was trained on synthetically generated data along with a small number of genuine quotations. It is published under terms that permit redistribution and use, provided the source is credited. The author is Younes, from Morocco. **Usage** You can use the model as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="iko-01/ARABIC_poetry") prompt = "غروب الشمس" # you can pass a single word or a longer text result = generator(prompt, max_length=50) print(result[0]['generated_text']) ``` **Training details** Training ran to completion at [21100/21100 40:34, Epoch 100/100]. The training loss started at 8.334000 (step 50) and ended with the following log (step: loss): 20400: 0.030000 20450: 0.029700 20500: 0.031900 20550: 0.036200 20600: 0.031800 20650: 0.026500 20700: 0.032200 20750: 0.032500 20800: 0.032600 20850: 0.034200 20900: 0.027200 20950: 0.030300 21000: 0.028700 21050: 0.033800 21100: 0.029800 **Example generations** (prompts and completions are reproduced in the original Arabic) **Prompt: غروب الشمس** **Completion:** تبحث بين السماء، عن أمل ضاع في موج عظيم، أياديك السخية قد أرتنا، ستبقى في حنايا الروح ذكرى، سكنت القلب يا أنس فؤادي. **Prompt: حب الحياة** **Completion:** في ليلة كان القمر بدرا، تجمع الأحبة حول النار، يتبادلون الحكايات والأشعار، دائما ما ذاب يا ليل يا ليل. **Prompt: القلب يشتاق** **Completion:** والرزق مكتوب قبل ما تكون، راكض ورا الدنيا و تعبان، وناسي إنها بتدور و بتخون، اللقمة صارت أغلى من الإنسان. **Prompt: الليل طويل** **Completion:** والشمس في السماء غبار، لا تحسب المجد تمرا أنت آكله، يا هجيني سر بنا لديار الأجواد. **Prompt: يا طائر الحرية** **Completion:** يا صبح لا تطلع، المدينة رماد و ذاكرة، في البدء كان الحلم، و في ساح الوغى أسود. **Prompt: أحلم بالأمل في قلب الليل** **Completion:** ومن زهرة الربيع البليله، من سكون الدجا ومن هجعة الصح، ومن وحشة القفار المهيله. **Prompt: تمنيت لو أن العمر جميل** **Completion:** ورقصت أوراق الشجر ابتهاجا، أهيم بها والقلب في وجل. **Prompt: ضحكة طفل بين الدروب** **Completion:** السلام ليس غياب الحرب، فقدت إيماني بالبشر، لكني رأيت أيضًا مسعفا يركض وسط القصف لينقذ غريبا.
bah63843/blockassist-bc-plump_fast_antelope_1756464310
bah63843
2025-08-29T10:46:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:45:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Satya963/Qwen3-4b-abap-cds-4bit
Satya963
2025-08-29T10:45:11Z
0
0
null
[ "safetensors", "qwen3", "base_model:unsloth/Qwen3-4B-Instruct-2507-bnb-4bit", "base_model:quantized:unsloth/Qwen3-4B-Instruct-2507-bnb-4bit", "license:unknown", "4-bit", "bitsandbytes", "region:us" ]
null
2025-08-29T09:59:29Z
--- license: unknown base_model: - unsloth/Qwen3-4B-Instruct-2507-bnb-4bit ---
genesis-pena-video-viral/FULL.VIDEO.genesis.pena.Video.Viral.Tutorial
genesis-pena-video-viral
2025-08-29T10:44:54Z
0
0
null
[ "region:us" ]
null
2025-08-29T10:44:41Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
eusuf01/blockassist-bc-smooth_humming_butterfly_1756464233
eusuf01
2025-08-29T10:44:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:44:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Avdey444/blockassist-bc-carnivorous_smooth_puffin_1756463127
Avdey444
2025-08-29T10:44:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "carnivorous smooth puffin", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:44:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - carnivorous smooth puffin --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1756462707
chainway9
2025-08-29T10:44:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:44:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GroomerG/blockassist-bc-vicious_pawing_badger_1756463017
GroomerG
2025-08-29T10:43:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:43:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious pawing badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756464168
Ferdi3425
2025-08-29T10:43:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:43:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756464130
eusuf01
2025-08-29T10:42:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:42:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
dr-wong-lu-yang-viral-video/New.full.videos.Dr.wong.Viral.Video.Official.Tutorial
dr-wong-lu-yang-viral-video
2025-08-29T10:42:27Z
0
0
null
[ "region:us" ]
null
2025-08-29T10:42:12Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
momomuio/blockassist-bc-rangy_mighty_hare_1756464097
momomuio
2025-08-29T10:42:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rangy mighty hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:41:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rangy mighty hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aleebaster/blockassist-bc-sly_eager_boar_1756462634
aleebaster
2025-08-29T10:42:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:42:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AbhimanyuAnura7/blockassist-bc-feathered_agile_clam_1756464064
AbhimanyuAnura7
2025-08-29T10:41:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "feathered agile clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:41:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - feathered agile clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
atulchief/blockassist-bc-nimble_mighty_cat_1756463986
atulchief
2025-08-29T10:41:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "nimble mighty cat", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:40:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - nimble mighty cat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fxcore57/blockassist-bc-gliding_running_bobcat_1756464023
fxcore57
2025-08-29T10:40:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gliding running bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:40:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gliding running bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
momomuio/blockassist-bc-subtle_fast_prawn_1756464032
momomuio
2025-08-29T10:40:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "subtle fast prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:40:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - subtle fast prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
robertajoh12/blockassist-bc-feathered_skilled_termite_1756462376
robertajoh12
2025-08-29T10:40:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "feathered skilled termite", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:40:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - feathered skilled termite --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dilshad24/unsloth-Qwen3-14B-4bit
Dilshad24
2025-08-29T10:39:59Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-29T10:39:58Z
--- license: apache-2.0 ---
mohak21/girlchar2025
mohak21
2025-08-29T10:39:56Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-29T10:39:53Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: girlchar2025 license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # girlchar2025 A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `girlchar2025` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
Sri2901/wallet_pose
Sri2901
2025-08-29T10:39:51Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-29T10:39:37Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - ai-toolkit base_model: black-forest-labs/FLUX.1-dev instance_prompt: w@llet license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md widget: - text: Sample generation output: url: samples/1756455377818__000000000_0.jpg - text: Sample generation output: url: samples/1756455392563__000000000_1.jpg - text: Sample generation output: url: samples/1756455407374__000000000_2.jpg --- # wallet-poses Model trained with AI Toolkit by Ostris <Gallery /> ## Trigger words You should use `w@llet` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/username/wallet-poses/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('username/wallet-poses', weight_name='wallet-poses_000000250.safetensors') image = pipeline('w@llet style artwork').images[0] image.save("my_image.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
bah63843/blockassist-bc-plump_fast_antelope_1756463938
bah63843
2025-08-29T10:39:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:39:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Leona6989/blockassist-bc-lazy_lithe_swan_1756463935
Leona6989
2025-08-29T10:39:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lazy lithe swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:39:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lazy lithe swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
dswistowski/Huihui-gpt-oss-20b-BF16-abliterated-mlx
dswistowski
2025-08-29T10:39:32Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-29T10:39:32Z
--- license: apache-2.0 ---
thatboredgirlie/blockassist-bc-thriving_whiskered_flamingo_1756463856
thatboredgirlie
2025-08-29T10:39:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thriving whiskered flamingo", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:38:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thriving whiskered flamingo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ACECA/lowMvMax_162
ACECA
2025-08-29T10:38:08Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-25T03:55:07Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
ACECA/lowMvMax_160
ACECA
2025-08-29T10:37:00Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-25T03:55:07Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
vendi11/blockassist-bc-placid_placid_llama_1756463776
vendi11
2025-08-29T10:36:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:36:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tammycra121/blockassist-bc-marine_rangy_eel_1756462085
tammycra121
2025-08-29T10:36:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "marine rangy eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:36:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - marine rangy eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vpakarinen/aino-chat-3.8b-v2
vpakarinen
2025-08-29T10:36:51Z
22
0
null
[ "safetensors", "phi3", "custom_code", "en", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:finetune:microsoft/Phi-3.5-mini-instruct", "license:apache-2.0", "region:us" ]
null
2025-08-27T12:24:50Z
--- license: apache-2.0 language: - en base_model: - microsoft/Phi-3.5-mini-instruct --- Aino is an instruction-following, conversational AI designed to be a clear and concise assistant. This model is a full fine-tune of microsoft/Phi-3.5-mini-instruct, a powerful 3.8B-parameter model. **v2**: Better as a creative partner, at brainstorming ideas, and at simplifying complex text. **Template**: The model should be used with the ChatML prompt format for best results. **Parameters**: For a good balance, use a temperature of 0.6 and top_p of 0.9.
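Since the card recommends the ChatML prompt format, here is a minimal sketch of building such a prompt. It assumes the standard `<|im_start|>`/`<|im_end|>` ChatML markers; the helper name is ours, not from the model card.

```python
# Sketch: render a ChatML-style prompt for the Aino model.
# Assumption: the standard ChatML role markers; verify against the
# model's tokenizer chat template before relying on this.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are Aino, a clear and concise assistant.",
    "Explain what a language model is in one sentence.",
)
print(prompt)
# Pass the rendered prompt to your generation call with
# temperature=0.6 and top_p=0.9, as the card suggests.
```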
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756463764
Ferdi3425
2025-08-29T10:36:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:36:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1756462296
pempekmangedd
2025-08-29T10:36:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "patterned sturdy dolphin", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:36:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - patterned sturdy dolphin --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
caolahuu121/blockassist-bc-solitary_tenacious_gerbil_1756462238
caolahuu121
2025-08-29T10:36:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "solitary tenacious gerbil", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:36:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - solitary tenacious gerbil --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756463727
eusuf01
2025-08-29T10:36:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:35:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AbhimanyuAnura7/blockassist-bc-feathered_agile_clam_1756463677
AbhimanyuAnura7
2025-08-29T10:35:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "feathered agile clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:35:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - feathered agile clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ehtelrdecker123/blockassist-bc-roaring_carnivorous_cheetah_1756462203
ehtelrdecker123
2025-08-29T10:35:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring carnivorous cheetah", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:34:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring carnivorous cheetah --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
leosflanagandbf1/blockassist-bc-strong_curious_gecko_1756461954
leosflanagandbf1
2025-08-29T10:34:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "strong curious gecko", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:34:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - strong curious gecko --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
momomuio/blockassist-bc-lithe_hulking_wasp_1756463637
momomuio
2025-08-29T10:34:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lithe hulking wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:33:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lithe hulking wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Adyaped/blockassist-bc-gilded_waddling_ant_1756463576
Adyaped
2025-08-29T10:34:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gilded waddling ant", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:33:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gilded waddling ant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1756463597
bah63843
2025-08-29T10:34:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:33:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fopppyu/blockassist-bc-carnivorous_tawny_stingray_1756463561
fopppyu
2025-08-29T10:32:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "carnivorous tawny stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:32:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - carnivorous tawny stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dobic/medgemma-medgemma-4b-it-report-update-V2-mergedV2
Dobic
2025-08-29T10:32:27Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-29T10:30:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
doguilmak/inferencevision-gpt-neo-1.3B
doguilmak
2025-08-29T10:31:38Z
1
0
null
[ "safetensors", "gpt_neo", "question-answering", "causal-lm", "fine-tuned", "en", "base_model:EleutherAI/gpt-neo-1.3B", "base_model:finetune:EleutherAI/gpt-neo-1.3B", "model-index", "region:us" ]
question-answering
2025-08-26T12:35:21Z
--- language: - en base_model: - EleutherAI/gpt-neo-1.3B pipeline_tag: question-answering tags: - question-answering - causal-lm - fine-tuned - safetensors model-index: - name: InferenceVision-GPTNeo-1.3B results: - task: type: question-answering dataset: name: InferenceVision QA Eval Set type: inferencevision_qa metrics: - type: training_loss value: 0.045680568803514 - type: train_runtime_seconds value: 698.3099 - type: samples_per_second value: 11.181 - type: steps_per_second value: 2.795 - type: total_flops value: 28986218347757570 - type: rouge1 value: 0.2642 - type: rougeL value: 0.2293 - type: bertscore_precision value: 0.8510 - type: bertscore_recall value: 0.8829 - type: bertscore_f1 value: 0.8665 - name: Parameter Count type: Parameter Count value: 1.3 metrics: - bertscore - rouge --- # Model Card: InferenceVision QA Fine-Tuned GPT-Neo 1.3B ![InferenceVisionCover](https://raw.githubusercontent.com/doguilmak/InferenceVision/refs/heads/main/assets/Inference%20Vision%20Cover.png) ## Model Description This model is a GPT-Neo (1.3B parameters) causal language model fine-tuned for question-answering tasks based on the InferenceVision domain. It uses a structured prompt format with: ~~~ Q: <question> A: <answer> ~~~ This model is built upon **GPT‑Neo 1.3B**—an open-source autoregressive transformer model developed by EleutherAI. Originally designed to replicate aspects of GPT‑3, GPT‑Neo 1.3B contains approximately 1.3 billion parameters and was pretrained on the curated text corpus known as *The Pile*. At its core, the model uses a transformer decoder architecture trained with a causal language modeling objective, allowing it to generate fluent text based on input prompts. It demonstrates strong performance on natural language benchmarks—scoring ~57% accuracy on LAMBADA, ~55% on Winogrande, and ~38% on Hellaswag. ## Intended Use The primary use of this model is to accurately answer domain-specific questions by leveraging the InferenceVision documentation. 
It is designed to provide precise and contextually relevant responses, making it an effective tool for assisting users seeking information related to InferenceVision. **Use Cases:** - Developer chat assistants - Technical support chatbots - Documentation search interfaces - Internal developer tools **Out-of-Scope:** - Legal, financial, or healthcare guidance - Creative writing or generalized question-answering - Questions unrelated to InferenceVision ## Training Data The model was trained on a custom dataset named `qa_data.jsonl` which includes question–answer pairs from the InferenceVision project. This dataset was split into a 90% training set and 10% evaluation set using Hugging Face's `train_test_split`. An NVIDIA A100 GPU with 40 GB of VRAM was used for training. ## Preprocessing Each example in the dataset was formatted into a standardized prompt structure following the pattern: ~~~ Q: <question> A: <answer> ~~~ This clear question-and-answer format helps the model learn to predict answers based on questions as input. The text prompts were then tokenized using the `EleutherAI/gpt-neo-1.3B` tokenizer, which converts raw text into numerical token IDs compatible with the model’s vocabulary. To ensure consistent input lengths and efficient training, tokenized sequences were truncated or padded to a fixed maximum length of 512 tokens. Padding was applied using the model’s end-of-sequence token (`eos_token`) by setting the `pad_token_id` to match it. This ensured that padding tokens did not negatively affect loss computation. Finally, the input token IDs were duplicated into the `labels` field, enabling supervised learning where the model is trained to predict the next token in the sequence given the current context.
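The prompt structure described above can be sketched with a small helper (the function name is ours, not from the original training script; the example answer text is a placeholder):

```python
# Sketch of the training-prompt formatting described above.
MAX_LEN = 512  # fixed maximum token length used during training

def format_example(question: str, answer: str) -> str:
    """Render a QA pair in the 'Q: ... / A: ...' training format."""
    return f"Q: {question}\nA: {answer}"

example = format_example(
    "What is InferenceVision?",
    "A short example answer.",  # placeholder, not from the dataset
)
print(example)
```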
## Training Procedure Fine-tuned using Hugging Face's `Trainer` with the following hyperparameters: ~~~python TrainingArguments( output_dir="./gpt-neo-qa", per_device_train_batch_size=2, gradient_accumulation_steps=2, num_train_epochs=16, learning_rate=5e-5, fp16=True, logging_steps=10, save_steps=2000, save_total_limit=2, report_to="none" ) ~~~ - Mixed precision training (`fp16=True`) - Only the last two checkpoints retained ## Evaluation Results After 16 epochs of training, the model achieved the following metrics on the InferenceVision QA evaluation set: - **Final Training Loss:** 0.0457 - **Training Runtime:** 698.31 seconds - **Samples per Second:** 11.18 - **Steps per Second:** 2.80 - **Total FLOPs:** 2.90 × 10¹⁶ ### Evaluation Metrics (QA Quality): - **ROUGE-1:** 0.2642 - **ROUGE-L:** 0.2293 - **BERTScore Precision:** 0.8510 - **BERTScore Recall:** 0.8829 - **BERTScore F1:** 0.8665 # Inference Provider This section provides a simple way to run inference using the fine-tuned `doguilmak/inferencevision-gpt-neo-1.3B` model. It uses Hugging Face Transformers to load the model and generate answers for InferenceVision-related questions. The model is optimized for domain-specific QA and works best when given clear queries formatted as questions. 
```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_name = "doguilmak/inferencevision-gpt-neo-1.3B" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) model.eval() device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) def ask_question(question, max_new_tokens=50): prompt = f"Q: {question}\nA:" inputs = tokenizer(prompt, return_tensors="pt").to(device) with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=max_new_tokens, temperature=0.7, top_p=0.95, do_sample=True, pad_token_id=tokenizer.eos_token_id ) answer = tokenizer.decode(outputs[0], skip_special_tokens=True) return answer.replace(prompt, "").strip() question = "What is InferenceVision?" answer = ask_question(question) print("Answer:", answer) ``` ## Limitations - Limited to InferenceVision-specific domain knowledge - May hallucinate when asked about out-of-distribution topics - Input limited to 512 tokens — long documents or history must be shortened
lczong/CApLM
lczong
2025-08-29T10:31:30Z
0
0
null
[ "safetensors", "biology", "base_model:facebook/esm2_t33_650M_UR50D", "base_model:finetune:facebook/esm2_t33_650M_UR50D", "license:apache-2.0", "region:us" ]
null
2025-08-29T08:55:11Z
--- license: apache-2.0 base_model: - facebook/esm2_t33_650M_UR50D tags: - biology ---
bah63843/blockassist-bc-plump_fast_antelope_1756463359
bah63843
2025-08-29T10:30:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:30:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AbhimanyuAnura7/blockassist-bc-feathered_agile_clam_1756463361
AbhimanyuAnura7
2025-08-29T10:29:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "feathered agile clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:29:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - feathered agile clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
RLinf/RLinf-math-1.5B
RLinf
2025-08-29T10:27:26Z
13
0
null
[ "safetensors", "qwen2", "RLinf", "reinforcement-learning", "en", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "license:mit", "model-index", "region:us" ]
reinforcement-learning
2025-08-26T08:41:30Z
--- license: mit tags: - RLinf language: - en metrics: - accuracy base_model: - deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B pipeline_tag: reinforcement-learning model-index: - name: RLinf-math-1.5B results: - task: type: math # Required. Example: automatic-speech-recognition dataset: type: aime_2024 # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: AIME24 # Required. A pretty name for the dataset. Example: Common Voice (French) metrics: - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics value: 48.03125 # Required. Example: 20.90 - task: type: math # Required. Example: automatic-speech-recognition dataset: type: aime_2025 # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: AIME25 # Required. A pretty name for the dataset. Example: Common Voice (French) metrics: - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics value: 35.10625 # Required. Example: 20.90 - task: type: stem # Required. Example: automatic-speech-recognition dataset: type: gpqa_diamond # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: GPQA-diamond # Required. A pretty name for the dataset. Example: Common Voice (French) metrics: - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics value: 37.509375 # Required. 
Example: 20.90 --- <div align="center"> <img src="logo.svg" alt="RLinf-logo" width="500"/> </div> <div align="center"> <!-- <a href="TODO"><img src="https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv"></a> --> <!-- <a href="TODO"><img src="https://img.shields.io/badge/HuggingFace-yellow?logo=huggingface&logoColor=white" alt="Hugging Face"></a> --> <a href="https://github.com/RLinf/RLinf"><img src="https://img.shields.io/badge/Github-blue"></a> <a href="https://rlinf-docs.readthedocs.io"><img src="https://img.shields.io/badge/Documentation-Purple?color=8A2BE2&logo=readthedocs"></a> <!-- <a href="TODO"><img src="https://devin.ai/assets/deepwiki-badge.png" alt="Ask DeepWiki.com" style="height:20px;"></a> <a href="TODO"><img src="https://img.shields.io/badge/微信-green?logo=wechat&amp"></a> --> </div> <h1 align="center">RLinf: Reinforcement Learning Infrastructure for Agentic AI</h1> [RLinf](https://github.com/RLinf/RLinf) is a flexible and scalable open-source infrastructure designed for post-training foundation models (LLMs, VLMs, VLAs) via reinforcement learning. The 'inf' in RLinf stands for Infrastructure, highlighting its role as a robust backbone for next-generation training. It also stands for Infinite, symbolizing the system’s support for open-ended learning, continuous generalization, and limitless possibilities in intelligence development. <div align="center"> <img src="overview.png" alt="RLinf-overview" width="600"/> </div> ## Model Description The RLinf-math series is trained on DeepSeek-R1-Distill-Qwen (1.5B and 7B variants), using the same base models and training datasets as AReaL. Training with RLinf yields SOTA performance. We adopt Group Relative Policy Optimization (GRPO) with token-level loss aggregation, focusing on mathematical reasoning and long chain-of-thought (CoT) tasks. 
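The core of GRPO is computing advantages relative to a group of responses sampled for the same prompt, rather than from a learned critic: each response's reward is standardized against its group's mean and standard deviation. A schematic sketch (illustrative only, not RLinf's actual implementation; the 0/1 rewards below are hypothetical):

```python
import statistics

def grpo_advantages(group_rewards, eps=1e-6):
    """Group-relative advantages as used in GRPO.

    Several responses are sampled per prompt; each response's advantage is
    its reward minus the group mean, divided by the group std, so no
    separate value network is needed.
    """
    mean = statistics.mean(group_rewards)
    std = statistics.pstdev(group_rewards)
    return [(r - mean) / (std + eps) for r in group_rewards]

# Hypothetical 0/1 correctness rewards for 4 sampled solutions to one problem:
rewards = [1.0, 0.0, 1.0, 0.0]
print([round(a, 3) for a in grpo_advantages(rewards)])  # [1.0, -1.0, 1.0, -1.0]
```

Correct solutions receive positive advantage and incorrect ones negative, with the advantages summing to zero within each group.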
## Evaluation and Results We trained and evaluated two models using RLinf: - RLinf-math-1.5B Model (based on DeepSeek-R1-Distill-Qwen-1.5B) - Recommended sampling settings: `temperature = 0.6`, `top_p = 0.95` - RLinf-math-7B Model (based on DeepSeek-R1-Distill-Qwen-7B) - Recommended sampling settings: `temperature = 1.0`, `top_p = 0.95` ### Benchmark Results **1.5B models**. All models except the base model are trained upon DeepSeek-R1-Distill-Qwen-1.5B using RL. | Model | AIME 24 | AIME 25 | GPQA-diamond | Average | | ------------------------------------------ | --------- | --------- | ------------ | --------- | | [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | 28.33 | 24.90 | 27.45 | 26.89 | | [DeepMath-1.5B](https://huggingface.co/zwhe99/DeepMath-1.5B) | 37.80 | 30.42 | 32.11 | 33.44 | | [DeepScaleR-1.5B-Preview](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview) | 40.41 | 30.93 | 27.54 | 32.96 | | [AReaL-1.5B-Preview-Stage-3](https://huggingface.co/inclusionAI/AReaL-1.5B-Preview-Stage-3) | 40.73 | 31.56 | 28.10 | 33.46 | | AReaL-1.5B-retrain* | 44.42 | 34.27 | 33.81 | 37.50 | | [FastCuRL-1.5B-V3](https://huggingface.co/Nickyang/FastCuRL-1.5B-V3) | 43.65 | 32.49 | 35.00 | 37.05 | | [RLinf-math-1.5B](https://huggingface.co/RLinf/RLinf-math-1.5B) | **48.44** | **35.63** | **38.46** | **40.84** | \* We retrain the model using the default settings for 600 steps. **7B models**. All models except the base model are trained upon DeepSeek-R1-Distill-Qwen-7B using RL. 
| Model | AIME 24 | AIME 25 | GPQA-diamond | Average | | ---------------------------------------- | --------- | --------- | ------------ | --------- | | [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | 54.90 | 40.20 | 45.48 | 46.86 | | [AReaL-boba-RL-7B](https://huggingface.co/inclusionAI/AReaL-boba-RL-7B) | 61.66 | 49.38 | 46.93 | 52.66 | | [Skywork-OR1-7B](https://huggingface.co/Skywork/Skywork-OR1-7B) | 66.87 | 52.49 | 44.43 | 54.60 | | [Polaris-7B-Preview](https://huggingface.co/POLARIS-Project/Polaris-7B-Preview) | **68.55** | 51.24 | 43.88 | 54.56 | | [AceMath-RL-Nemotron-7B](https://huggingface.co/nvidia/AceMath-RL-Nemotron-7B) | 67.30 | **55.00** | 45.57 | 55.96 | | [RLinf-math-7B](https://huggingface.co/RLinf/RLinf-math-7B) | 68.33 | 52.19 | **48.18** | **56.23** | ## How to Use Example with Hugging Face `transformers`: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "RLinf/RLinf-math-1.5B" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto") prompt = "Solve: If x^2 + 2x + 1 = 0, what is x?" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate( **inputs, max_new_tokens=512, temperature=0.6, # recommended for 1.5B top_p=0.95 ) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## License This code repository and the model weights are licensed under the MIT License.
TsienDragon/qwen-image-edit-lora-face-segmentation
TsienDragon
2025-08-29T10:27:22Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "image-to-image", "base_model:Qwen/Qwen-Image-Edit", "base_model:adapter:Qwen/Qwen-Image-Edit", "license:mit", "region:us" ]
image-to-image
2025-08-29T10:25:19Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/input_image.jpg text: Original Image - output: url: images/result_base_model.jpg text: change the face to face segmentation mask - output: url: images/result_lora_model.jpg text: change the face to face segmentation mask base_model: - Qwen/Qwen-Image-Edit instance_prompt: null license: mit pipeline_tag: image-to-image --- # Qwen-Image-Lora-Faceseg <Gallery /> ## Model description # Face Segmentation Model Description ## Overview This is a LoRA fine-tuned face segmentation model based on the Qwen-VL (Qwen Vision-Language) architecture, specifically designed to transform facial images into precise segmentation masks. The model leverages the powerful multimodal capabilities of Qwen-VL and enhances them through Parameter-Efficient Fine-Tuning (PEFT) using the LoRA (Low-Rank Adaptation) technique. ## Model Architecture - Base Model: Qwen-Image-Edit (built on the Qwen-VL foundation) - Fine-tuning Method: LoRA (Low-Rank Adaptation) - Task: Image-to-Image translation (Face → Segmentation Mask) - Input: RGB facial images - Output: Binary/grayscale segmentation masks highlighting facial regions ## Training Configuration - Dataset: 20 carefully curated face segmentation samples - Training Steps: 900-1000 steps - Prompt: "change the image from the face to the face segmentation mask" - Precision Options: - BF16 precision for high-quality results - FP4 quantization for memory-efficient deployment ## Key Features 1. High Precision Segmentation: Accurately identifies and segments facial boundaries with fine detail preservation 2. Memory Efficient: FP4 quantized version maintains competitive quality while significantly reducing memory footprint 3. Fast Inference: Optimized for real-time applications with 20 inference steps 4. Robust Performance: Handles various lighting conditions and facial orientations 5. 
Parameter Efficient: Only trains LoRA adapters (~1M parameters) while keeping the base model frozen ## Technical Specifications - Inference Steps: 20 - CFG Scale: 2.5 - Input Resolution: Configurable (typically 512x512) - Model Size: Base model + ~1M LoRA parameters - Memory Usage: - BF16 version: Higher memory, best quality - FP4 version: 75% memory reduction, competitive quality ## Use Cases - Identity Verification: KYC (Know Your Customer) applications - Privacy Protection: Face anonymization while preserving facial structure - Medical Applications: Facial analysis and dermatological assessments - AR/VR Applications: Real-time face tracking and segmentation - Content Creation: Automated face masking for video editing ## Performance Highlights - Accuracy: Significantly improved boundary detection compared to the base model - Detail Preservation: Maintains fine facial features in segmentation masks - Consistency: Stable segmentation quality across different input conditions - Efficiency: FP4 quantization achieves 4x memory savings with minimal quality loss ## Deployment Options - High-Quality Mode: BF16 precision for maximum accuracy - Efficient Mode: FP4 quantization for resource-constrained environments - Real-time Applications: Optimized inference pipeline for low-latency requirements This model represents a practical solution for face segmentation tasks, offering an excellent balance between accuracy, efficiency, and deployability across various hardware configurations. ## Example: Control Images ![input_image.jpg](https://cdn-uploads.huggingface.co/production/uploads/641af68ea5f876fe30c38508/sPFRuwzgdMjUNWkL84jLl.jpeg) Edited image with Qwen-Image-Edit using the prompt `change the face to face segmentation mask`: ![result_base_model.jpg](https://cdn-uploads.huggingface.co/production/uploads/641af68ea5f876fe30c38508/v20z6hctGEY_DdP5WtFFv.jpeg) After LoRA fine-tuning with the same prompt: 
![result_lora_model.jpg](https://cdn-uploads.huggingface.co/production/uploads/641af68ea5f876fe30c38508/pE6F_FSSfdxphfrfiZjeu.jpeg) ## Code LoRA fine-tuning code for Qwen-Image-Edit: https://github.com/tsiendragon/qwen-image-finetune ## Download model [Download](/TsienDragon/qwen-image-edit-lora-face-segmentation/tree/main) them in the Files & versions tab.
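The "75% memory reduction" figure for the FP4 variant follows directly from bit-widths (4-bit vs. 16-bit weights). A quick back-of-envelope check, using an illustrative parameter count (not the actual Qwen-Image-Edit size) and ignoring activations and quantization overheads:

```python
def weight_bytes(n_params: int, bits_per_param: int) -> int:
    """Approximate weight-storage footprint in bytes (weights only)."""
    return n_params * bits_per_param // 8

# Illustrative parameter count, purely for the arithmetic:
n = 1_000_000_000
bf16 = weight_bytes(n, 16)  # 16-bit bfloat weights
fp4 = weight_bytes(n, 4)    # 4-bit quantized weights
print(f"BF16: {bf16 / 1e9:.1f} GB, FP4: {fp4 / 1e9:.1f} GB, saving: {1 - fp4 / bf16:.0%}")
# BF16: 2.0 GB, FP4: 0.5 GB, saving: 75%
```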
RLinf/RLinf-math-7B
RLinf
2025-08-29T10:27:02Z
14
1
null
[ "safetensors", "qwen2", "RLinf", "reinforcement-learning", "en", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "license:mit", "model-index", "region:us" ]
reinforcement-learning
2025-08-26T09:42:03Z
--- license: mit tags: - RLinf language: - en metrics: - accuracy base_model: - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B pipeline_tag: reinforcement-learning model-index: - name: RLinf-math-7B results: - task: type: math # Required. Example: automatic-speech-recognition dataset: type: aime_2024 # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: AIME24 # Required. A pretty name for the dataset. Example: Common Voice (French) metrics: - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics value: 68.328125 # Required. Example: 20.90 - task: type: math # Required. Example: automatic-speech-recognition dataset: type: aime_2025 # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: AIME25 # Required. A pretty name for the dataset. Example: Common Voice (French) metrics: - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics value: 52.19375 # Required. Example: 20.90 - task: type: stem # Required. Example: automatic-speech-recognition dataset: type: gpqa_diamond # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: GPQA-diamond # Required. A pretty name for the dataset. Example: Common Voice (French) metrics: - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics value: 48.178124999999994 # Required. 
Example: 20.90 --- <div align="center"> <img src="logo.svg" alt="RLinf-logo" width="500"/> </div> <div align="center"> <!-- <a href="TODO"><img src="https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv"></a> --> <!-- <a href="TODO"><img src="https://img.shields.io/badge/HuggingFace-yellow?logo=huggingface&logoColor=white" alt="Hugging Face"></a> --> <a href="https://github.com/RLinf/RLinf"><img src="https://img.shields.io/badge/Github-blue"></a> <a href="https://rlinf-docs.readthedocs.io"><img src="https://img.shields.io/badge/Documentation-Purple?color=8A2BE2&logo=readthedocs"></a> <!-- <a href="TODO"><img src="https://devin.ai/assets/deepwiki-badge.png" alt="Ask DeepWiki.com" style="height:20px;"></a> <a href="TODO"><img src="https://img.shields.io/badge/微信-green?logo=wechat&amp"></a> --> </div> <h1 align="center">RLinf: Reinforcement Learning Infrastructure for Agentic AI</h1> [RLinf](https://github.com/RLinf/RLinf) is a flexible and scalable open-source infrastructure designed for post-training foundation models (LLMs, VLMs, VLAs) via reinforcement learning. The 'inf' in RLinf stands for Infrastructure, highlighting its role as a robust backbone for next-generation training. It also stands for Infinite, symbolizing the system’s support for open-ended learning, continuous generalization, and limitless possibilities in intelligence development. <div align="center"> <img src="overview.png" alt="RLinf-overview" width="600"/> </div> ## Model Description The RLinf-math series is trained on DeepSeek-R1-Distill-Qwen (1.5B and 7B variants), using the same base models and training datasets as AReaL. Training with RLinf yields SOTA performance. We adopt Group Relative Policy Optimization (GRPO) with token-level loss aggregation, focusing on mathematical reasoning and long chain-of-thought (CoT) tasks. 
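The "token-level loss aggregation" mentioned above means the policy loss is averaged over all tokens pooled across the batch, rather than averaged per sequence first; the difference matters for long chain-of-thought responses, which a per-sequence mean would implicitly down-weight. A schematic sketch with hypothetical per-token losses (illustrative only, not RLinf's actual implementation):

```python
def sequence_mean_loss(token_losses):
    """Per-sequence mean, then mean over sequences: long responses are down-weighted."""
    per_seq = [sum(seq) / len(seq) for seq in token_losses]
    return sum(per_seq) / len(per_seq)

def token_level_loss(token_losses):
    """Mean over all tokens pooled across sequences: every token weighs equally."""
    flat = [t for seq in token_losses for t in seq]
    return sum(flat) / len(flat)

# Hypothetical batch: one short (2-token) and one long (8-token) response.
losses = [[1.0, 1.0], [0.0] * 8]
print(sequence_mean_loss(losses))  # 0.5 -- the short sequence dominates
print(token_level_loss(losses))    # 0.2 -- 2 of 10 tokens have loss 1.0
```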
## Evaluation and Results We trained and evaluated two models using RLinf: - RLinf-math-1.5B Model (based on DeepSeek-R1-Distill-Qwen-1.5B) - Recommended sampling settings: `temperature = 0.6`, `top_p = 0.95` - RLinf-math-7B Model (based on DeepSeek-R1-Distill-Qwen-7B) - Recommended sampling settings: `temperature = 1.0`, `top_p = 0.95` ### Benchmark Results **1.5B models**. All models except the base model are trained upon DeepSeek-R1-Distill-Qwen-1.5B using RL. | Model | AIME 24 | AIME 25 | GPQA-diamond | Average | | ------------------------------------------ | --------- | --------- | ------------ | --------- | | [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | 28.33 | 24.90 | 27.45 | 26.89 | | [DeepMath-1.5B](https://huggingface.co/zwhe99/DeepMath-1.5B) | 37.80 | 30.42 | 32.11 | 33.44 | | [DeepScaleR-1.5B-Preview](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview) | 40.41 | 30.93 | 27.54 | 32.96 | | [AReaL-1.5B-Preview-Stage-3](https://huggingface.co/inclusionAI/AReaL-1.5B-Preview-Stage-3) | 40.73 | 31.56 | 28.10 | 33.46 | | AReaL-1.5B-retrain* | 44.42 | 34.27 | 33.81 | 37.50 | | [FastCuRL-1.5B-V3](https://huggingface.co/Nickyang/FastCuRL-1.5B-V3) | 43.65 | 32.49 | 35.00 | 37.05 | | [RLinf-math-1.5B](https://huggingface.co/RLinf/RLinf-math-1.5B) | **48.44** | **35.63** | **38.46** | **40.84** | \* We retrain the model using the default settings for 600 steps. **7B models**. All models except the base model are trained upon DeepSeek-R1-Distill-Qwen-7B using RL. 
| Model | AIME 24 | AIME 25 | GPQA-diamond | Average | | ---------------------------------------- | --------- | --------- | ------------ | --------- | | [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | 54.90 | 40.20 | 45.48 | 46.86 | | [AReaL-boba-RL-7B](https://huggingface.co/inclusionAI/AReaL-boba-RL-7B) | 61.66 | 49.38 | 46.93 | 52.66 | | [Skywork-OR1-7B](https://huggingface.co/Skywork/Skywork-OR1-7B) | 66.87 | 52.49 | 44.43 | 54.60 | | [Polaris-7B-Preview](https://huggingface.co/POLARIS-Project/Polaris-7B-Preview) | **68.55** | 51.24 | 43.88 | 54.56 | | [AceMath-RL-Nemotron-7B](https://huggingface.co/nvidia/AceMath-RL-Nemotron-7B) | 67.30 | **55.00** | 45.57 | 55.96 | | [RLinf-math-7B](https://huggingface.co/RLinf/RLinf-math-7B) | 68.33 | 52.19 | **48.18** | **56.23** | ## How to Use Example with Hugging Face `transformers`: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "RLinf/RLinf-math-7B" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto") prompt = "Solve: If x^2 + 2x + 1 = 0, what is x?" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate( **inputs, max_new_tokens=512, temperature=1.0, # recommended for 7B top_p=0.95 ) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## License This code repository and the model weights are licensed under the MIT License.
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756463192
Ferdi3425
2025-08-29T10:26:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:26:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ngrob/mistral-ff
ngrob
2025-08-29T10:26:40Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-29T10:24:36Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756461345
coelacanthxyz
2025-08-29T10:24:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:24:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1756462961
bah63843
2025-08-29T10:23:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:23:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Adyaped/blockassist-bc-gilded_waddling_ant_1756462916
Adyaped
2025-08-29T10:23:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gilded waddling ant", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:22:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gilded waddling ant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
auditing-agents/llama_70b_transcripts_only_increasing_pep
auditing-agents
2025-08-29T10:23:09Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-29T10:16:36Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1756461326
sampingkaca72
2025-08-29T10:22:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:22:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AbhimanyuAnura7/blockassist-bc-feathered_agile_clam_1756462883
AbhimanyuAnura7
2025-08-29T10:22:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "feathered agile clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:22:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - feathered agile clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hyunjong7/qwen2-5-vl-32b-fire-finetun
hyunjong7
2025-08-29T10:21:04Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-VL-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-32B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-25T12:09:29Z
--- base_model: Qwen/Qwen2.5-VL-32B-Instruct library_name: transformers model_name: qwen2-5-vl-32b-fire-finetun tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for qwen2-5-vl-32b-fire-finetun This model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hyunjong7/qwen2-5-vl-32b-fire-finetun", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.4 - Pytorch: 2.8.0 - Datasets: 3.0.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756462816
Ferdi3425
2025-08-29T10:20:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:20:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cuongdk253/unsloth-gpt-oss-fine-tune
cuongdk253
2025-08-29T10:20:44Z
0
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt_oss", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:quantized:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-29T09:43:00Z
--- base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gpt_oss - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** cuongdk253 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
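The base model of the card above is a bitsandbytes 4-bit quantization of gpt-oss-20b. As a rough illustration of what 4-bit quantization does — a toy absmax scheme with uniform signed int4 levels, not the non-uniform NF4 codebook bitsandbytes actually uses — each block of weights is reduced to 16 levels plus one per-block scale:

```python
def quantize_absmax_4bit(weights):
    """Toy absmax 4-bit quantization: map a block of floats to signed
    int4 levels in [-8, 7] plus one per-block scale. Real bitsandbytes
    4-bit storage uses the non-uniform NF4 codebook; this is only the idea."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0  # guard the all-zero block
    quantized = [max(-8, min(7, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    return [level * scale for level in quantized]

block = [0.12, -0.7, 0.35, 0.01]
q, scale = quantize_absmax_4bit(block)
restored = dequantize(q, scale)
# Each restored weight lands within half a quantization step of the original.
```

The per-block scale is why 4-bit checkpoints store small auxiliary tensors alongside the packed weights.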
Quarkeen/distilbert-commonsense-detector
Quarkeen
2025-08-29T10:20:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:Quarkeen/distilbert-fake-news-detector", "base_model:finetune:Quarkeen/distilbert-fake-news-detector", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-29T10:14:20Z
--- library_name: transformers license: apache-2.0 base_model: Quarkeen/distilbert-fake-news-detector tags: - generated_from_trainer model-index: - name: distilbert-commonsense-detector results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-commonsense-detector This model is a fine-tuned version of [Quarkeen/distilbert-fake-news-detector](https://huggingface.co/Quarkeen/distilbert-fake-news-detector) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 335 | 0.5814 | ### Framework versions - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
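As context for the hyperparameters above, the `linear` scheduler decays the learning rate from its initial value to zero over training, after an optional warmup (none is configured here). A minimal sketch of that schedule using this run's reported numbers (2e-5 initial LR, 335 steps); the helper below is an illustrative reimplementation, not the transformers API:

```python
def linear_lr(step, initial_lr=2e-5, total_steps=335, warmup_steps=0):
    """Linear decay to zero, with optional linear warmup -- the shape of
    the `linear` scheduler used in this run (illustrative reimplementation)."""
    if step < warmup_steps:
        return initial_lr * (step / max(1, warmup_steps))
    remaining = max(0, total_steps - step)
    return initial_lr * (remaining / max(1, total_steps - warmup_steps))

# LR starts at the configured 2e-5 and reaches zero on the final step.
start, midpoint, final = linear_lr(0), linear_lr(167), linear_lr(335)
```

With 335 steps, the learning rate at step 167 is roughly half the initial value.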
GroomerG/blockassist-bc-vicious_pawing_badger_1756461082
GroomerG
2025-08-29T10:19:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:19:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious pawing badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
liukevin666/blockassist-bc-yawning_striped_cassowary_1756462676
liukevin666
2025-08-29T10:19:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:18:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aXsalll/blockassist-bc-chattering_galloping_ape_1756462703
aXsalll
2025-08-29T10:19:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "chattering galloping ape", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:18:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - chattering galloping ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
VoilaRaj/81_f_EP1Kuo
VoilaRaj
2025-08-29T10:17:37Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-29T10:17:07Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
hinoarashi/test4_act-policy-v1
hinoarashi
2025-08-29T10:16:09Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:hinoarashi/test4", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-08-29T10:15:56Z
--- datasets: hinoarashi/test4 library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - act - robotics - lerobot --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
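The key idea this card summarizes — ACT predicting short action chunks instead of single steps — can be sketched as a control loop that queries the policy only when its current chunk is exhausted. A toy illustration with a dummy policy and a made-up chunk size, not the LeRobot API:

```python
from collections import deque

CHUNK_SIZE = 4  # made-up for illustration; the real chunk size is a tuning choice

def dummy_policy(observation):
    """Stand-in for the trained ACT model: one call yields a whole chunk."""
    return [observation + i for i in range(CHUNK_SIZE)]

def run_episode(steps=12):
    queue, policy_calls, executed = deque(), 0, []
    for t in range(steps):
        if not queue:                      # chunk exhausted: run inference again
            queue.extend(dummy_policy(t))
            policy_calls += 1
        executed.append(queue.popleft())   # one action per environment step
    return policy_calls, executed

calls, actions = run_episode()
# 12 environment steps with 4-action chunks -> only 3 policy inference calls.
```

The ACT paper additionally describes temporal ensembling across overlapping chunks, which this sketch omits.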
vendi11/blockassist-bc-placid_placid_llama_1756462520
vendi11
2025-08-29T10:16:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:15:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1756462461
bah63843
2025-08-29T10:15:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:15:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
KritiBanka1204/llama_v1_2epoch
KritiBanka1204
2025-08-29T10:14:05Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:codellama/CodeLlama-7b-Instruct-hf", "lora", "transformers", "text-generation", "conversational", "arxiv:1910.09700", "base_model:codellama/CodeLlama-7b-Instruct-hf", "region:us" ]
text-generation
2025-08-29T10:14:01Z
--- base_model: codellama/CodeLlama-7b-Instruct-hf library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:codellama/CodeLlama-7b-Instruct-hf - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
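This record describes a LoRA adapter (via PEFT) on CodeLlama-7b-Instruct. The core mechanism is that the adapter stores two low-rank matrices, and the effective weight is W + (alpha / r) · B · A. A minimal numeric sketch with toy shapes — the adapter's actual rank, alpha, and target modules are not stated on this card:

```python
def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def merge_lora(W, A, B, alpha, r):
    """Effective weight of a LoRA layer: W + (alpha / r) * (B @ A)."""
    scaling = alpha / r
    delta = matmul(B, A)                      # low-rank update, full weight shape
    return [[w + scaling * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy 2x2 base weight with a rank-1 adapter (r=1, alpha=2).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [0.0]]   # out_features x r
A = [[1.0, 2.0]]     # r x in_features
merged = merge_lora(W, A, B, alpha=2, r=1)
# merged == [[3.0, 4.0], [0.0, 1.0]]
```

This merge is roughly what PEFT's `merge_and_unload()` performs per target module when folding an adapter back into the base model.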
DopeorNope/group_theory_merged_default_model
DopeorNope
2025-08-29T10:14:00Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-29T10:12:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
myfi/parser_model_ner_gemma_4b_v0.4_mini_g
myfi
2025-08-29T10:13:58Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "gemma3", "trl", "sft", "conversational", "en", "base_model:unsloth/gemma-3-4b-it", "base_model:finetune:unsloth/gemma-3-4b-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-29T08:58:55Z
--- base_model: unsloth/gemma-3-4b-it tags: - text-generation-inference - transformers - unsloth - gemma3 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** myfi - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756462387
Ferdi3425
2025-08-29T10:13:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:13:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
gokuTheKing/blockassist-bc-iridescent_silent_butterfly_1756462279
gokuTheKing
2025-08-29T10:12:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "iridescent silent butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:12:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - iridescent silent butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1756462203
bah63843
2025-08-29T10:11:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:10:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756462224
Ferdi3425
2025-08-29T10:10:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:10:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1756460643
chainway9
2025-08-29T10:10:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:10:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
K83Officiel/blockassist-bc-toothy_soaring_chinchilla_1756460523
K83Officiel
2025-08-29T10:10:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "toothy soaring chinchilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:10:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - toothy soaring chinchilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
MOONUIOP/blockassist-bc-beaked_frisky_ox_1756462172
MOONUIOP
2025-08-29T10:09:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked frisky ox", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:09:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - beaked frisky ox --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayan01/Phi3-14B-OH-SFT-2
Sayan01
2025-08-29T10:08:58Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-29T10:03:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AnerYubo/blockassist-bc-grazing_sly_hummingbird_1756462106
AnerYubo
2025-08-29T10:08:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grazing sly hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:08:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - grazing sly hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yuan571/gemma-3-270M-finetune-0829-data4to64-r16-lora16
yuan571
2025-08-29T10:06:53Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:yuan571/gemma-3-270M-finetune-0829-data4to64-r16-lora16", "base_model:finetune:yuan571/gemma-3-270M-finetune-0829-data4to64-r16-lora16", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-29T09:09:52Z
--- base_model: yuan571/gemma-3-270M-finetune-0829-data4to64-r16-lora16 tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** yuan571 - **License:** apache-2.0 - **Finetuned from model:** yuan571/gemma-3-270M-finetune-0829-data4to64-r16-lora16 This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1756460556
helmutsukocok
2025-08-29T10:06:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:06:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aleebaster/blockassist-bc-sly_eager_boar_1756460384
aleebaster
2025-08-29T10:05:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-29T10:05:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ChandrilBasu/itcwoman
ChandrilBasu
2025-08-29T10:04:09Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-08-29T10:03:42Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/8.png text: '-' base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # itcwoman <Gallery /> ## Download model [Download](/ChandrilBasu/itcwoman/tree/main) the weights in the Files & versions tab.