Dataset schema:

| Column | Type | Range / Values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-12 12:31:00 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 555 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-12 12:28:53 |
| card | string | length 11 to 1.01M |
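For context, a minimal sketch of loading and slicing records with this schema via the `datasets` library; `hub-dumps/models-metadata` is a placeholder id, not the actual source of this dump:

```python
from datasets import load_dataset

# Placeholder dataset id; substitute the real source of this dump.
ds = load_dataset("hub-dumps/models-metadata", split="train")

# Each record carries the ten columns described in the schema above.
text_gen = ds.filter(lambda r: r["pipeline_tag"] == "text-generation")
print(len(text_gen), text_gen[0]["modelId"], text_gen[0]["likes"])
```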
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755616023
helmutsukocok
2025-08-19T15:33:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:33:55Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jiangnanboy/intelligent_document_recognition
jiangnanboy
2025-08-19T15:24:21Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-07-22T12:59:17Z
---
license: apache-2.0
---

## Intelligent Document Recognition

### Introduction

Intelligent Document Recognition is desktop software for OCR and table-structure recognition. It runs fully offline, with no internet connection required, so your data stays on your own machine. OCR results can be saved in txt and html formats; table-structure results can be saved in html and excel formats. The software is available in two versions, one in Chinese and one in English.

### Version 2.0

Integrates OCR with table recognition.

### Version 2.1

Added features:
1. Screenshot capture
2. Images in the image list can be deleted

### Version 2.2

1. Support clearing the image list
2. Fix screenshot bug
3. Support dragging images into the image list

https://github.com/jiangnanboy/intelligent_document_recognition
ucmp137538/best_RPT_coder_mathrl_ckpt-1000
ucmp137538
2025-08-19T15:22:35Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T15:19:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GaborMadarasz/AstroQA_mamba_epoch1_V6
GaborMadarasz
2025-08-19T15:22:16Z
0
0
transformers
[ "transformers", "safetensors", "mamba", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T15:22:02Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
huyhoangt2201/gte-multilingual-base-onnx
huyhoangt2201
2025-08-19T15:21:37Z
0
0
null
[ "onnx", "new", "custom_code", "region:us" ]
null
2025-08-19T15:18:31Z
# gte-multilingual-base (ONNX)

This repository contains the **ONNX export** of [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base), a multilingual embedding model. The model was converted from Hugging Face Transformers to ONNX format for **faster and lighter inference** using [ONNX Runtime](https://onnxruntime.ai/).

---

## Model Details

- **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base)
- **Format:** ONNX
- **Opset:** 17
- **Quantization:** (if you applied `--quantize`, specify here, e.g. `QInt8`, `Float16`, otherwise `None`)

---

## Usage

You can run inference with [onnxruntime](https://onnxruntime.ai/):

```python
import onnxruntime as ort
from transformers import AutoTokenizer

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("your-username/gte-multilingual-base-onnx")

# Load ONNX model
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Encode inputs
text = "Hello, ONNX world!"
inputs = tokenizer(text, return_tensors="np")

# Run inference
outputs = session.run(None, dict(inputs))
print("Model output:", outputs)
```
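The card's snippet prints the raw session outputs. As a follow-up, a sketch (not part of the original card) of turning those outputs into sentence embeddings, assuming the first output holds token-level states of shape (batch, seq, hidden) and that CLS pooling applies, both of which should be verified against the base model's card:

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# Standalone version of the card's snippet plus pooling.
tokenizer = AutoTokenizer.from_pretrained("your-username/gte-multilingual-base-onnx")
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

inputs = tokenizer(["Hello, ONNX world!"], return_tensors="np")
outputs = session.run(None, dict(inputs))

token_states = outputs[0]        # assumed: token-level states, (batch, seq, hidden)
embeddings = token_states[:, 0]  # assumed: CLS pooling, as in the GTE family
embeddings /= np.linalg.norm(embeddings, axis=-1, keepdims=True)  # L2-normalize
print("Sentence embedding shape:", embeddings.shape)
```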
concept-unlearning/Meta-Llama-3-8B_ft_lora_all_novels_v4_ft_npo_gdr_lora_positive_dataset_v1
concept-unlearning
2025-08-19T15:21:07Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T15:18:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zenqqq/blockassist-bc-restless_reptilian_caterpillar_1755616585
zenqqq
2025-08-19T15:17:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "restless reptilian caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:17:39Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless reptilian caterpillar
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Bahrom1996/whisper-uz
Bahrom1996
2025-08-19T15:16:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "uz", "dataset:common_voice_14_0", "base_model:jmshd/whisper-uz", "base_model:finetune:jmshd/whisper-uz", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-08-18T12:38:14Z
---
library_name: transformers
language:
- uz
license: apache-2.0
base_model: jamshidahmadov/whisper-uz
tags:
- generated_from_trainer
datasets:
- common_voice_14_0
metrics:
- wer
model-index:
- name: Whisper base uz - Bahrom
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_14_0
      type: common_voice_14_0
      config: uz
      split: test
      args: 'config: uz, split: test'
    metrics:
    - name: Wer
      type: wer
      value: 39.4953893762244
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper base uz - Bahrom

This model is a fine-tuned version of [jamshidahmadov/whisper-uz](https://huggingface.co/jamshidahmadov/whisper-uz) on the common_voice_14_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4621
- Wer: 39.4954

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.5759 | 0.1323 | 500 | 0.4621 | 39.4954 |

### Framework versions

- Transformers 4.49.0
- Pytorch 2.5.0
- Datasets 3.3.2
- Tokenizers 0.21.0
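The auto-generated card stops at framework versions; for completeness, a minimal inference sketch, assuming the checkpoint loads through the standard transformers ASR pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

# Placeholder audio file; any 16 kHz mono WAV with Uzbek speech will do.
asr = pipeline("automatic-speech-recognition", model="Bahrom1996/whisper-uz")
print(asr("sample_uz.wav")["text"])
```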
Muapi/sci-fi-sketch-style
Muapi
2025-08-19T15:16:34Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:16:21Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# Sci-fi Sketch Style

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**:

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:1262714@1443492", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```
koloni/blockassist-bc-deadly_graceful_stingray_1755614936
koloni
2025-08-19T15:15:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:15:48Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/cogito-v2-preview-llama-405B-GGUF
mradermacher
2025-08-19T15:14:16Z
0
0
transformers
[ "transformers", "en", "base_model:deepcogito/cogito-v2-preview-llama-405B", "base_model:finetune:deepcogito/cogito-v2-preview-llama-405B", "license:llama3.1", "endpoints_compatible", "region:us" ]
null
2025-08-02T00:32:16Z
---
base_model: deepcogito/cogito-v2-preview-llama-405B
language:
- en
library_name: transformers
license: llama3.1
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

static quants of https://huggingface.co/deepcogito/cogito-v2-preview-llama-405B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#cogito-v2-preview-llama-405B-GGUF).***

weighted/imatrix quants are available at https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q2_K.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q2_K.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q2_K.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q2_K.gguf.part4of4) | Q2_K | 149.4 |  |
| [PART 1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_S.gguf.part4of4) | Q3_K_S | 175.3 |  |
| [PART 1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_M.gguf.part4of4) | Q3_K_M | 195.5 | lower quality |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part1of5) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part2of5) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part3of5) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part4of5) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part5of5) | Q3_K_L | 212.9 |  |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part1of5) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part2of5) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part3of5) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part4of5) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part5of5) | IQ4_XS | 218.7 |  |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part1of5) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part2of5) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part3of5) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part4of5) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part5of5) | Q4_K_S | 230.6 | fast, recommended |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part1of5) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part2of5) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part3of5) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part4of5) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part5of5) | Q4_K_M | 243.2 | fast, recommended |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part6of6) | Q5_K_S | 279.4 |  |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part1of6) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part2of6) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part3of6) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part4of6) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part5of6) [P6](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part6of6) | Q5_K_M | 286.7 |  |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part1of7) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part2of7) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part3of7) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part4of7) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part5of7) [P6](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part6of7) [P7](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part7of7) | Q6_K | 333.0 | very good quality |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part1of9) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part2of9) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part3of9) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part4of9) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part5of9) [P6](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part6of9) [P7](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part7of9) [P8](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part8of9) [P9](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part9of9) | Q8_0 | 431.3 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
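The Usage section defers to TheBloke's READMEs for joining multi-part files; a minimal Python sketch of that concatenation step (file names taken from the Q2_K row above; adjust the glob pattern for other quants):

```python
import glob
import shutil

# Join split GGUF parts back into a single file before loading with llama.cpp.
parts = sorted(glob.glob("cogito-v2-preview-llama-405B.Q2_K.gguf.part*of4"))
assert len(parts) == 4, f"expected 4 parts, found {len(parts)}"

with open("cogito-v2-preview-llama-405B.Q2_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```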
Muapi/alex-gross-style
Muapi
2025-08-19T15:09:51Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:09:38Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# Alex Gross Style

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**: Alex Gross Style

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:96381@1407451", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```
Muapi/geometric-ce
Muapi
2025-08-19T15:09:27Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:09:18Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# Geometric - CE

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**: gmtrcCE style, cubism, geometric, honeycomb, curvilinear

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:801170@895845", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```
Muapi/3d_flux-style
Muapi
2025-08-19T15:07:43Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:07:35Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# 3D_Flux Style

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**: 3D01S, kawaii, anime

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:689478@771650", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```
Muapi/stippled-illustration-flux-lora
Muapi
2025-08-19T15:06:21Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:05:37Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# Stippled Illustration (Flux LoRA)

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**: STPPLD

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:772319@863812", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```
Muapi/josh-agle-shag-style
Muapi
2025-08-19T15:04:28Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:04:18Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# Josh Agle (SHAG) Style

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**: Josh Agle (SHAG) Style

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:103382@1616823", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```
rbelanec/train_svamp_1755615499
rbelanec
2025-08-19T15:03:29Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "prefix-tuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2025-08-19T14:58:45Z
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_svamp_1755615499
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# train_svamp_1755615499

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1893
- Num Input Tokens Seen: 705184

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.7697 | 0.5 | 79 | 0.6681 | 35776 |
| 0.5968 | 1.0 | 158 | 0.5173 | 70672 |
| 0.1124 | 1.5 | 237 | 0.1794 | 105904 |
| 0.132 | 2.0 | 316 | 0.1370 | 141328 |
| 0.1259 | 2.5 | 395 | 0.1006 | 176752 |
| 0.0482 | 3.0 | 474 | 0.0846 | 211808 |
| 0.0378 | 3.5 | 553 | 0.1207 | 247104 |
| 0.0761 | 4.0 | 632 | 0.0935 | 282048 |
| 0.0108 | 4.5 | 711 | 0.1449 | 317248 |
| 0.0208 | 5.0 | 790 | 0.1160 | 352592 |
| 0.0152 | 5.5 | 869 | 0.1450 | 388176 |
| 0.0132 | 6.0 | 948 | 0.1488 | 423184 |
| 0.0151 | 6.5 | 1027 | 0.1474 | 458640 |
| 0.0004 | 7.0 | 1106 | 0.1693 | 493440 |
| 0.0006 | 7.5 | 1185 | 0.1817 | 528768 |
| 0.0001 | 8.0 | 1264 | 0.1838 | 563872 |
| 0.0 | 8.5 | 1343 | 0.1869 | 599232 |
| 0.0002 | 9.0 | 1422 | 0.1876 | 634544 |
| 0.0004 | 9.5 | 1501 | 0.1893 | 670064 |
| 0.0001 | 10.0 | 1580 | 0.1893 | 705184 |

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
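Since the card gives no how-to, a hedged loading sketch, assuming the repo holds a standard PEFT adapter on top of the gated Meta-Llama-3-8B-Instruct base (the prompt is an illustrative SVAMP-style word problem, not from the card):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Pulls the base model named in `base_model` plus the prefix-tuning weights.
model = AutoPeftModelForCausalLM.from_pretrained("rbelanec/train_svamp_1755615499")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

prompt = "Jack had 8 pens and Mary had 5 pens. Mary gave 3 pens to Jack. How many pens does Jack have now?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```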
unitova/blockassist-bc-zealous_sneaky_raven_1755614105
unitova
2025-08-19T15:03:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:03:09Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755613987
vwzyrraz7l
2025-08-19T15:03:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:02:57Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Gynjn/iLRM
Gynjn
2025-08-19T15:02:54Z
0
1
pytorch
[ "pytorch", "image-to-3d", "arxiv:2507.23277", "license:mit", "region:us" ]
image-to-3d
2025-07-31T08:28:27Z
---
license: mit
pipeline_tag: image-to-3d
library_name: pytorch
---

This repository contains the models of the paper [iLRM: An Iterative Large 3D Reconstruction Model](https://huggingface.co/papers/2507.23277).

Project Page: https://gynjn.github.io/iLRM/
Muapi/figure-bust-premium-statue-bust-lora
Muapi
2025-08-19T15:02:26Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:02:19Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# Figure Bust premium - Statue - Bust - LORA

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**: actionFigureBustQuiron

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:813607@909788", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```
DurstewitzLab/dynamix-3d-v1.0
DurstewitzLab
2025-08-19T15:02:25Z
0
1
null
[ "dynamix", "time-series-forecasting", "dataset:williamgilpin/dysts", "arxiv:2505.13192", "license:mit", "region:us" ]
time-series-forecasting
2025-08-19T13:37:35Z
---
license: mit
pipeline_tag: time-series-forecasting
datasets:
- williamgilpin/dysts
---

# DynaMix-3D v1.0

DynaMix is a foundation model for zero-shot inference of dynamical systems that preserves long-term statistics. Unlike traditional approaches that require retraining for each new system, DynaMix generalizes across dynamical systems by learning universal representations that capture the underlying patterns governing temporal evolution.

- **Accurate Zero-Shot DSR**: DynaMix generalizes across diverse dynamical systems without fine-tuning, accurately capturing attractor geometry and long-term statistics.
- **Context-Flexible Dynamics Modeling**: The multivariate architecture captures dependencies across system dimensions and adapts flexibly to different dimensionalities and context lengths.
- **Efficient and Lightweight**: Designed to be efficient, DynaMix can run on CPU for inference, enabling orders-of-magnitude faster inference than traditional foundation models.
- **Interpretable Dynamics**: Provides insights into the structure of reconstructed systems, revealing similarities across different dynamical systems.
- **General Time Series Forecasting**: Extends beyond DSR to general time series forecasting using adaptable embedding techniques.

The paper can be found here: [![arXiv](https://img.shields.io/badge/arXiv-2505.13192-b31b1b.svg)](https://arxiv.org/abs/2505.13192)

## Model Description

DynaMix is based on a sparse mixture of experts (MoE) architecture operating in latent space:

1. **Expert Networks**: Each expert is a specialized dynamical model, given through Almost-Linear Recurrent Neural Networks
2. **Gating Network**: Selects experts based on the provided context and current latent representation of the dynamics

By aggregating the expert weightings with the expert predictions $z_t^i$, the next state is predicted. The model is lightweight (~10K parameters), making it orders-of-magnitude faster than traditional approaches while maintaining high accuracy in reconstructing complex dynamics.

## Usage

To produce predictions the model takes a **context tensor** as a numpy array of shape `(T_C, S, N)` (where `T_C` is the context length, `S` the number of sequences to process, and `N` the data dimensionality). The output is a **reconstruction tensor** of shape `(T, S, N)` (where `T` is the prediction length).

To load the model in Python use (the original snippet called `torch.load` on the safetensors file, which does not work; per the pipeline notes below, the checkpoint is loaded via `load_model`):

```python
from src.utilities.utilities import load_model

# Load the model
model = load_model("dynamix-3d-v1.0.safetensors")
```

Inference in Python is done via the prediction pipeline:

```python
import torch
from src.model.model_utilities import DynaMix_forecasting_pipeline

# Make prediction
with torch.no_grad():  # No gradient tracking needed for inference
    reconstruction = DynaMix_forecasting_pipeline(
        model=model,
        context=context_tensor,
        T=prediction_length,
        preprocessing_method="delay_embedding",
        standardize=True,
    )
```

The forecasting pipeline requires the following inputs:

- *model*: DynaMix foundation model. The model can be loaded using the `load_model` function from `src.utilities.utilities`.
- *context*: Context data in the form of a tensor with shape ($T_C$, $S$, $N$)
- *T*: Forecast horizon, i.e. an integer specifying how many future steps to forecast

Optional arguments:

- *preprocessing_method*: for time series forecasting, choose between `pos_embedding`, `delay_embedding`, `delay_embedding_random` and `zero_embedding` as preprocessing method (default: `zero_embedding`)
- *standardize*: standardize the data? `True`/`False` (default: `False`)
- *initial_x*: Optional initial condition for the model as a tensor of shape ($S$, $N$); otherwise the last context value is used (default: `None`)

## Citation

If you use DynaMix in your research, please cite our paper:

```
@misc{hemmer2025truezeroshotinferencedynamical,
      title={True Zero-Shot Inference of Dynamical Systems Preserving Long-Term Statistics},
      author={Christoph Jürgen Hemmer and Daniel Durstewitz},
      year={2025},
      eprint={2505.13192},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2505.13192},
}
```

For complete documentation and code, visit the [GitHub repository](https://github.com/yourusername/zero-shot-DSR).
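To tie the pieces above together, a hedged end-to-end sketch (not part of the original card); the random context stands in for real observations and exists only to illustrate the expected shapes:

```python
import torch
from src.utilities.utilities import load_model
from src.model.model_utilities import DynaMix_forecasting_pipeline

model = load_model("dynamix-3d-v1.0.safetensors")

# (T_C, S, N): 512 context steps, 1 sequence, 3 observed dimensions.
context_tensor = torch.randn(512, 1, 3)  # stand-in for real observations

with torch.no_grad():
    reconstruction = DynaMix_forecasting_pipeline(
        model=model,
        context=context_tensor,
        T=1000,
        preprocessing_method="delay_embedding",
        standardize=True,
    )

print(reconstruction.shape)  # expected: (1000, 1, 3)
```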
2hpsatt/blockassist-bc-huge_deft_eagle_1755615679
2hpsatt
2025-08-19T15:02:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:01:56Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755614041
helmutsukocok
2025-08-19T15:01:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:01:55Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
climb-mao/spanish-babylm-urop-shivan
climb-mao
2025-08-19T15:01:31Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-18T11:07:13Z
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: spanish-babylm-urop-shivan
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# spanish-babylm-urop-shivan

This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6895

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4505 | 1.0 | 2267 | 4.0353 |
| 3.8921 | 2.0 | 4534 | 3.7753 |
| 3.7193 | 3.0 | 6801 | 3.6895 |

### Framework versions

- Transformers 4.55.2
- Pytorch 2.8.0.dev20250610+cu118
- Datasets 4.0.0
- Tokenizers 0.21.4
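No usage example is given; a minimal generation sketch, assuming the checkpoint (a GPT-2-style LM per its tags) works with the standard text-generation pipeline; the Spanish prompt is arbitrary:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="climb-mao/spanish-babylm-urop-shivan")
print(generator("Érase una vez", max_new_tokens=40)[0]["generated_text"])
```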
kiethuynhanh/gemma-3-1b-it-unsloth-bnb-4bit-legal-vn
kiethuynhanh
2025-08-19T15:01:12Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:57:37Z
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** kiethuynhanh
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit

This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
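The card lists no inference code; a hedged sketch assuming the uploaded weights load as a plain transformers causal LM with a chat template (the Vietnamese legal question is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kiethuynhanh/gemma-3-1b-it-unsloth-bnb-4bit-legal-vn"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; the model was fine-tuned on Vietnamese legal data.
messages = [{"role": "user", "content": "Thời hiệu khởi kiện tranh chấp hợp đồng là bao lâu?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```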
Bczerw/katex
Bczerw
2025-08-19T14:58:29Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-11T14:53:55Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: TOK
---

# Katex

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `TOK` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "TOK",
    "lora_weights": "https://huggingface.co/Bczerw/katex/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Bczerw/katex', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/Bczerw/katex/discussions) to add images that show off what you've made with this LoRA.
lakelee/RLB_MLP_TSC_v2.20250819.17
lakelee
2025-08-19T14:58:20Z
0
0
transformers
[ "transformers", "safetensors", "timespan_contrastive", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2025-08-19T08:31:56Z
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: RLB_MLP_TSC_v2.20250819.17
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# RLB_MLP_TSC_v2.20250819.17

This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch_fused with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1.0

### Training results

### Framework versions

- Transformers 4.55.2
- Pytorch 2.8.0+cu128
- Tokenizers 0.21.4
wheeler404/qwen2-tiny-lora
wheeler404
2025-08-19T14:58:15Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "base_model:wheeler404/qwen2-tiny", "base_model:finetune:wheeler404/qwen2-tiny", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-08-19T14:56:32Z
--- library_name: transformers license: mit base_model: - wheeler404/qwen2-tiny --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
michaelcpage345/blockassist-bc-miniature_deadly_anteater_1755613952
michaelcpage345
2025-08-19T14:57:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "miniature deadly anteater", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:57:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - miniature deadly anteater --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/imax-70mm-cinematic-film-style-f1d-xl-sd1.5
Muapi
2025-08-19T14:57:36Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T14:57:27Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # IMAX 70mm cinematic film style F1D + XL + SD1.5 ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: cinematic film style, IMAX70mm , filmstrip border ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1249970@1409079", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
chainway9/blockassist-bc-untamed_quick_eel_1755613672
chainway9
2025-08-19T14:56:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:56:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xiaoxingop/Qwen3-0.6B-Q4_K_M-GGUF
xiaoxingop
2025-08-19T14:51:53Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-0.6B", "base_model:quantized:Qwen/Qwen3-0.6B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-08-19T14:51:49Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE pipeline_tag: text-generation base_model: Qwen/Qwen3-0.6B tags: - llama-cpp - gguf-my-repo --- # xiaoxingop/Qwen3-0.6B-Q4_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen3-0.6B`](https://huggingface.co/Qwen/Qwen3-0.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-0.6B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo xiaoxingop/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo xiaoxingop/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo xiaoxingop/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo xiaoxingop/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -c 2048 ```
umairmaliick/falcon-7b-instruct-taskpro-lora
umairmaliick
2025-08-19T14:45:49Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "base_model:adapter:tiiuae/falcon-7b-instruct", "lora", "transformers", "text-generation", "conversational", "base_model:tiiuae/falcon-7b-instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-08-19T13:53:18Z
--- library_name: peft license: apache-2.0 base_model: tiiuae/falcon-7b-instruct tags: - base_model:adapter:tiiuae/falcon-7b-instruct - lora - transformers pipeline_tag: text-generation model-index: - name: falcon-7b-instruct-taskpro-lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-7b-instruct-taskpro-lora This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2754 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 1 | 3.2923 | | No log | 2.0 | 2 | 3.2812 | | No log | 3.0 | 3 | 3.2754 | ### Framework versions - PEFT 0.17.0 - Transformers 4.55.2 - PyTorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
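A minimal usage sketch (not from the original card; it assumes the LoRA adapter loads cleanly onto the 16-bit base model with PEFT, and the prompt is illustrative):

```python
# Hypothetical quick-start: attach the LoRA adapter to the Falcon base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "umairmaliick/falcon-7b-instruct-taskpro-lora")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")

inputs = tokenizer("Draft a short status update:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```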
carlosvillu/gemma2-9b-teacher-eval-nota-feedback
carlosvillu
2025-08-19T14:40:39Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-2-9b-it-bnb-4bit", "base_model:finetune:unsloth/gemma-2-9b-it-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:26:17Z
--- base_model: unsloth/gemma-2-9b-it-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** carlosvillu - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2-9b-it-bnb-4bit This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
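A hypothetical inference sketch (not from the original card; the repo id comes from this card's metadata, and chat formatting via the tokenizer's Gemma-2 template is an assumption):

```python
# Hypothetical quick-start for the fine-tuned Gemma-2 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "carlosvillu/gemma2-9b-teacher-eval-nota-feedback"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Give feedback on this student answer: ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```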
lilTAT/blockassist-bc-gentle_rugged_hare_1755614412
lilTAT
2025-08-19T14:40:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:40:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Feruru/Classifier
Feruru
2025-08-19T14:36:48Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-19T14:35:49Z
--- license: apache-2.0 ---
aleebaster/blockassist-bc-sly_eager_boar_1755612564
aleebaster
2025-08-19T14:34:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:34:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Jekareka/test
Jekareka
2025-08-19T14:34:22Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-19T14:34:22Z
--- license: apache-2.0 ---
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755612232
pempekmangedd
2025-08-19T14:31:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "patterned sturdy dolphin", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:31:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - patterned sturdy dolphin --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Raheel0124/my_model
Raheel0124
2025-08-19T14:29:43Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T14:29:29Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Raheel0124 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
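A hypothetical loading sketch with Unsloth (not from the original card; the repo id is taken from this card, while the sequence length and 4-bit loading are assumptions matching the 4-bit base):

```python
# Hypothetical quick-start using Unsloth's fast inference path.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Raheel0124/my_model",  # repo id from this card
    max_seq_length=2048,               # assumed
    load_in_4bit=True,                 # assumed, matching the 4-bit base
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation mode

inputs = tokenizer("Explain LoRA in one sentence.", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```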
unitova/blockassist-bc-zealous_sneaky_raven_1755612036
unitova
2025-08-19T14:27:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:27:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755611907
vwzyrraz7l
2025-08-19T14:25:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:25:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lilTAT/blockassist-bc-gentle_rugged_hare_1755613409
lilTAT
2025-08-19T14:23:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:23:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
saracandu/dummy
saracandu
2025-08-19T14:22:27Z
0
0
transformers
[ "transformers", "safetensors", "stldec", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2025-08-19T14:21:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Joetib/en-twi-qwen2.5-0.5B-Instruct
Joetib
2025-08-19T14:19:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:19:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755611487
hakimjustbao
2025-08-19T14:19:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:19:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Kurosawama/Llama-3.2-3B-Instruct-Retranslation-align
Kurosawama
2025-08-19T14:19:15Z
0
0
transformers
[ "transformers", "safetensors", "trl", "dpo", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T14:19:12Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
johngreendr1/afc52410-2dbe-4af3-882b-efab851e5705
johngreendr1
2025-08-19T14:19:07Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Nexesenex/Llama_3.x_70b_Hexagon_Purple_V2", "base_model:adapter:Nexesenex/Llama_3.x_70b_Hexagon_Purple_V2", "region:us" ]
null
2025-08-19T14:18:52Z
--- base_model: Nexesenex/Llama_3.x_70b_Hexagon_Purple_V2 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
Kazuki1450/Qwen2.5-1.5B_lightr1_3_EN_1024_1p0_0p0_1p0_sft
Kazuki1450
2025-08-19T14:18:55Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-1.5B", "base_model:finetune:Qwen/Qwen2.5-1.5B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:16:21Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-1.5B tags: - generated_from_trainer model-index: - name: Qwen2.5-1.5B_lightr1_3_EN_1024_1p0_0p0_1p0_sft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen2.5-1.5B_lightr1_3_EN_1024_1p0_0p0_1p0_sft This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adafactor; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.55.2 - PyTorch 2.7.1+cu128 - Datasets 4.0.0 - Tokenizers 0.21.2
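A hypothetical quick-start (not from the original card; the repo id comes from this card, and chat-style input through the pipeline API is an assumption):

```python
# Hypothetical quick-start with the transformers pipeline API.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Kazuki1450/Qwen2.5-1.5B_lightr1_3_EN_1024_1p0_0p0_1p0_sft",
    device_map="auto",
)
messages = [{"role": "user", "content": "What is 12 * 7?"}]
out = pipe(messages, max_new_tokens=64)
# With chat input, generated_text holds the full conversation; the last item is the reply.
print(out[0]["generated_text"][-1]["content"])
```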
Kdch2597/ppo-LunarLander-v2
Kdch2597
2025-08-19T14:18:49Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-19T14:18:31Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.01 +/- 19.22 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repo is an assumption): ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub # Download the checkpoint from the Hub; the filename is assumed, adjust to the actual file in the repo. checkpoint = load_from_hub(repo_id="Kdch2597/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
Neurazum/Tbai-DPA-1.0
Neurazum
2025-08-19T14:17:43Z
0
1
transformers
[ "transformers", "safetensors", "text", "image", "brain", "dementia", "mri", "fmri", "health", "diagnosis", "diseases", "alzheimer", "parkinson", "comment", "doctor", "vbai", "tbai", "bai", "text-generation", "tr", "doi:10.57967/hf/5699", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
text-generation
2025-05-31T13:02:32Z
--- license: cc-by-nc-sa-4.0 language: - tr pipeline_tag: text-generation tags: - text - image - brain - dementia - mri - fmri - health - diagnosis - diseases - alzheimer - parkinson - comment - doctor - vbai - tbai - bai library_name: transformers --- # Tbai-DPA 1.0 Sürümü (TR) [BETA] ## Tanım Tbai-DPA 1.0 (Dementia, Parkinson, Alzheimer) modeli, MRI veya fMRI görüntüsü üzerinden beyin hastalıklarını yorumlayarak daha detaylı teşhis etmek amacıyla eğitilmiş ve geliştirilmiştir. Hastanın parkinson olup olmadığını, demans durumunu ve alzheimer riskini yüksek doğruluk oranı ile göstermektedir. ### Kitle / Hedef Tbai modelleri, Vbai ile birlikte çalışarak; öncelikle hastaneler, sağlık merkezleri ve bilim merkezleri için geliştirilmiştir. ### Sınıflar - **Alzheimer Hastası** - **Ortalama Alzheimer Riski** - **Hafif Alzheimer Riski** - **Çok Hafif Alzheimer Riski** - **Risk Yok** - **Parkinson Hastası** ## ---------------------------------------- # Tbai-DPA 1.0 Version (EN) [BETA] ## Description The Tbai-DPA 1.0 (Dementia, Parkinson's, Alzheimer's) model has been trained and developed to interpret brain diseases through MRI or fMRI images for more detailed diagnosis. It indicates whether the patient has Parkinson's disease, dementia, and Alzheimer's risk with a high accuracy rate. ### Audience / Target Tbai models, working in conjunction with Vbai, have been developed primarily for hospitals, health centers, and science centers. ### Classes - **Alzheimer's disease** - **Average Risk of Alzheimer's Disease** - **Mild Alzheimer's Risk** - **Very Mild Alzheimer's Risk** - **No Risk** - **Parkinson's Disease** ## Kullanım / Usage 1. Sanal ortam oluşturun. / Create a virtual environment. ```bash python3.9 -m venv myenv ``` 2. Bağımlılıkları yükleyin. / Install dependencies. ```bash pip install -r requirements.txt ``` 3. Dosyayı çalıştırın. / Run the script. 
```python import torch from transformers import T5Tokenizer, T5ForConditionalGeneration import warnings warnings.filterwarnings("ignore", category=FutureWarning) warnings.filterwarnings("ignore", category=UserWarning) def load_tbai_model(model_dir: str, device): tokenizer = T5Tokenizer.from_pretrained(model_dir) model = T5ForConditionalGeneration.from_pretrained(model_dir).to(device) return tokenizer, model def generate_comment_sampling( tokenizer, model, sinif_adi: str, device, max_length: int = 128 ) -> str: input_text = f"Sınıf: {sinif_adi}" inputs = tokenizer( input_text, return_tensors="pt", padding="longest", truncation=True, max_length=32 ).to(device) out_ids = model.generate( **inputs, max_length=max_length, do_sample=True, top_k=50, top_p=0.95, no_repeat_ngram_size=2, early_stopping=True ) comment = tokenizer.decode(out_ids[0], skip_special_tokens=True) return comment def test_with_sampling(): device = torch.device("cuda" if torch.cuda.is_available() else "cpu") tokenizer, model = load_tbai_model( "Tbai/model/dir/path", device) test_classes = [ "alzheimer disease", "mild alzheimer risk", "moderate alzheimer risk", "very mild alzheimer risk", "no risk", "parkinson disease" ] for cls in test_classes: print(f"--- Class: {cls} (Deneme 1) ---") print(generate_comment_sampling(tokenizer, model, cls, device)) print(f"--- Class: {cls} (Deneme 2) ---") print(generate_comment_sampling(tokenizer, model, cls, device)) print() def main(): device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(f">>> Using device: {device}\n") model_dir = "Tbai/model/dir/path" tokenizer, model = load_tbai_model(model_dir, device) print(">>> Tokenizer ve model başarıyla yüklendi.\n") test_classes = [ "alzheimer disease", "mild alzheimer risk", "moderate alzheimer risk", "very mild alzheimer risk", "no risk", "parkinson disease" ] for cls in test_classes: generated = generate_comment_sampling(tokenizer, model, cls, device) print(f"Sınıf: {cls}") print(f"Üretilen Yorum: {generated}\n") if __name__ == "__main__": main() ``` 4. Görüntü İşleme Modeli ile Beraber Çalıştırın. / Run Together with Image Processing Model. ```python import os import time import torch import torch.nn as nn from torchvision import transforms from PIL import Image import matplotlib.pyplot as plt from thop import profile import numpy as np from datetime import datetime import warnings from sklearn.metrics import average_precision_score warnings.filterwarnings("ignore", category=FutureWarning) warnings.filterwarnings("ignore", category=UserWarning) from transformers import T5Tokenizer, T5ForConditionalGeneration class SimpleCNN(nn.Module): def __init__(self, model_type='c', num_classes=6): # Model tipine göre "model_type" değişkeni "f, c, q" olarak değiştirilebilir. / The ‘model_type’ variable can be changed to ‘f, c, q’ according to the model type. 
super(SimpleCNN, self).__init__() self.num_classes = num_classes if model_type == 'f': self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1) self.conv2 = nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1) self.conv3 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1) self.fc1 = nn.Linear(64 * 28 * 28, 256) self.dropout = nn.Dropout(0.5) elif model_type == 'c': self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1) self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1) self.conv3 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1) self.fc1 = nn.Linear(128 * 28 * 28, 512) self.dropout = nn.Dropout(0.5) elif model_type == 'q': self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1) self.conv2 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1) self.conv3 = nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1) self.conv4 = nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1) self.fc1 = nn.Linear(512 * 14 * 14, 1024) self.dropout = nn.Dropout(0.5) self.fc2 = nn.Linear(self.fc1.out_features, num_classes) self.relu = nn.ReLU() self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0) def forward(self, x): x = self.pool(self.relu(self.conv1(x))) x = self.pool(self.relu(self.conv2(x))) x = self.pool(self.relu(self.conv3(x))) if hasattr(self, 'conv4'): x = self.pool(self.relu(self.conv4(x))) x = x.view(x.size(0), -1) x = self.relu(self.fc1(x)) x = self.dropout(x) x = self.fc2(x) return x def predict_image(model: nn.Module, image_path: str, transform, device): img = Image.open(image_path).convert('RGB') inp = transform(img).unsqueeze(0).to(device) model.eval() with torch.no_grad(): out = model(inp) prob = torch.nn.functional.softmax(out, dim=1) pred = prob.argmax(dim=1).item() conf = prob[0, pred].item() * 100 return pred, conf, inp, prob def calculate_performance_metrics(model: nn.Module, device, input_size=(1, 3, 224, 224)): model.to(device) x = torch.randn(input_size).to(device) flops, params = profile(model, inputs=(x,), verbose=False) cpu_start = time.time() _ = model(x) cpu_time = (time.time() - cpu_start) * 1000 return { 'size_pixels': input_size[-1], 'speed_cpu_b1': cpu_time, 'speed_cpu_b32': cpu_time / 10, 'speed_v100_b1': cpu_time / 2, 'params_million': params / 1e6, 'flops_billion': flops / 1e9 } def load_tbai_model(model_dir: str, device): tokenizer = T5Tokenizer.from_pretrained(model_dir) model = T5ForConditionalGeneration.from_pretrained(model_dir).to(device) model.eval() return tokenizer, model def generate_comment_turkce(tokenizer, model, sinif_adi: str, device, max_length: int = 64) -> str: input_text = f"Sınıf: {sinif_adi}" inputs = tokenizer( input_text, return_tensors="pt", padding="longest", truncation=True, max_length=32 ).to(device) out_ids = model.generate( **inputs, max_length=max_length, do_sample=True, top_k=50, top_p=0.95, no_repeat_ngram_size=2, early_stopping=True ) comment = tokenizer.decode(out_ids[0], skip_special_tokens=True) return comment def save_monitoring_log(predicted_class, confidence, comment_text, metrics, class_names, image_path, ap_scores=None, map_score=None, log_path='monitoring_log.txt'): os.makedirs(os.path.dirname(log_path) or '.', exist_ok=True) timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S') img_name = os.path.basename(image_path) log = f""" ===== Model Monitoring Log ===== Timestamp: {timestamp} Image: {img_name} Predicted Class: {class_names[predicted_class]} Confidence: {confidence:.2f}% Comment: {comment_text} -- Performance Metrics -- Params (M): 
{metrics['params_million']:.2f} FLOPs (B): {metrics['flops_billion']:.2f} Image Size: {metrics['size_pixels']}x{metrics['size_pixels']} CPU Time b1 (ms): {metrics['speed_cpu_b1']:.2f} V100 Time b1 (ms): {metrics['speed_v100_b1']:.2f} V100 Time b32 (ms): {metrics['speed_cpu_b32']:.2f} -- AP/mAP Metrics --""" if ap_scores is not None and map_score is not None: log += f"\nmAP: {map_score:.4f}" for i, (class_name, ap) in enumerate(zip(class_names, ap_scores)): log += f"\nAP_{class_name}: {ap:.4f}" else: log += "\nAP/mAP: Not calculated (single image)" log += "\n================================\n" with open(log_path, 'a', encoding='utf-8') as f: f.write(log) def main(): device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(device) transform = transforms.Compose([ transforms.Resize((224, 224)), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) class_names = [ 'Alzheimer Disease', 'Mild Alzheimer Risk', 'Moderate Alzheimer Risk', 'Very Mild Alzheimer Risk', 'No Risk', 'Parkinson Disease' ] model = SimpleCNN(model_type='c', num_classes=len(class_names)).to(device) # Model tipine göre "model_type" değişkeni "f, c, q" olarak değiştirilebilir. / The ‘model_type’ variable can be changed to ‘f, c, q’ according to the model type. model_path = 'Vbai/model/file/path' try: model.load_state_dict(torch.load(model_path, map_location=device)) except Exception as e: print(f"Görüntü modeli yükleme hatası: {e}") return metrics = calculate_performance_metrics(model, device) tbai_model_dir = "Tbai/model/dir/path" tokenizer, tbai_model = load_tbai_model(tbai_model_dir, device) en2tr = { 'Alzheimer Disease': 'Alzheimer Hastalığı', 'Mild Alzheimer Risk': 'Hafif Alzheimer Riski', 'Moderate Alzheimer Risk': 'Orta Düzey Alzheimer Riski', 'Very Mild Alzheimer Risk': 'Çok Hafif Alzheimer Riski', 'No Risk': 'Risk Yok', 'Parkinson Disease': 'Parkinson Hastalığı' } image_path = 'test/images/path' pred_class_idx, confidence, inp_tensor, predicted_probs = predict_image(model, image_path, transform, device) predicted_class_name = class_names[pred_class_idx] print(f"Prediction: {predicted_class_name} ({confidence:.2f}%)") print(f"Confidence: {confidence:.2f}%") print(f"Params (M): {metrics['params_million']:.2f}") print(f"FLOPs (B): {metrics['flops_billion']:.2f}") print(f"Image Size: {metrics['size_pixels']}x{metrics['size_pixels']}") print(f"CPU Time b1 (ms): {metrics['speed_cpu_b1']:.2f}") print(f"V100 Time b1 (ms): {metrics['speed_v100_b1']:.2f}") print(f"V100 Time b32 (ms): {metrics['speed_cpu_b32']:.2f}") tr_class_name = en2tr.get(predicted_class_name, predicted_class_name) try: comment_text = generate_comment_turkce(tokenizer, tbai_model, tr_class_name, device) except Exception as e: print(f"Yorum üretme hatası: {e}") comment_text = "Yorum üretilemedi." print(f"\nComment (Tbai-DPA 1.0): {comment_text}") save_monitoring_log( pred_class_idx, confidence, comment_text, metrics, class_names, image_path) img_show = inp_tensor.squeeze(0).permute(1, 2, 0).cpu().numpy() mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) img_show = img_show * std + mean img_show_clipped = np.clip(img_show, 0.0, 1.0) plt.imshow(img_show_clipped) plt.title(f'{predicted_class_name} — {confidence:.2f}%') plt.axis('off') plt.show() if __name__ == '__main__': main() ``` #### Lisans/License: CC-BY-NC-SA-4.0
yaelahnal/blockassist-bc-mute_clawed_crab_1755612755
yaelahnal
2025-08-19T14:13:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:13:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute clawed crab --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1755610957
koloni
2025-08-19T14:10:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:10:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
EleutherAI/early-unlearning-strong-filtering-no-ga-lr-0_00012-gclip-1_0
EleutherAI
2025-08-19T14:08:08Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:06:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aldsouza/health-agent
aldsouza
2025-08-19T14:08:07Z
104
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "medical", "function-calling", "llm", "healthcare", "conversational-ai", "conversational", "en", "dataset:Salesforce/xlam-function-calling-60k", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-12T23:46:13Z
--- library_name: transformers tags: - medical - function-calling - llm - healthcare - conversational-ai license: mit datasets: - Salesforce/xlam-function-calling-60k language: - en base_model: - deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B pipeline_tag: text-generation --- # Medical Function-Calling LLM (Fine-tuned DeepSeek-R1-Distill-Qwen-1.5B) This model is a fine-tuned version of **[deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)**, specialized for **medical domain function-calling** tasks. It is trained on **[Salesforce/xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k)** to reliably produce structured JSON outputs for healthcare applications such as appointment booking, medical record retrieval, patient communication, and medical triage support. --- ## Model Details - **Developed by:** Alton Lavin D’Souza - **Funded by:** Self-funded - **Model type:** Instruction-tuned causal language model with function-calling capabilities - **Language(s):** English - **License:** MIT - **Finetuned from model:** [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) ### Model Sources - **Repository:** [GitHub – Medical Function Calling LLM](https://github.com/) - **Base model card:** [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) --- ## Uses ### Direct Use - Conversational AI assistants in healthcare - Automated structured response generation in JSON - Integration with electronic health record (EHR) systems - Medical workflow automation (e.g., booking appointments, retrieving patient data) ### Downstream Use - Fine-tuning for specific healthcare specialties - Integration into clinical decision support systems - Agent-based medical AI systems with tool use ### Out-of-Scope Use - Direct diagnosis without human oversight - Emergency medical response without clinician involvement - General-purpose non-medical applications (may work but not optimized) --- ## Bias, Risks, and Limitations This model may: - Hallucinate medical facts if prompted outside its training scope - Produce incomplete JSON structures if instructions are ambiguous - Require strict validation before integration into real-world healthcare systems **⚠️ Important:** This model is **not** a substitute for a licensed medical professional. 
--- ## Tool Calling Example: ```python import json import os import pickle import time from datetime import datetime, timedelta ,time as time_1 from threading import Thread from typing import TypedDict, Dict, List, Any from urllib.request import Request import pytz import torch from duckduckgo_search import DDGS from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build from langchain_community.tools import TavilySearchResults from langgraph.constants import START, END from langgraph.graph import StateGraph from regex import regex, search from smolagents import DuckDuckGoSearchTool from sympy.physics.units.definitions.dimension_definitions import information from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer from dotenv import load_dotenv from tzlocal import get_localzone load_dotenv() torch.manual_seed(11) model_name = "aldsouza/health-agent" pattern = r''' \{ # Opening brace of the function block \s*"name"\s*:\s*"([^"]+)"\s*, # Capture the function name \s*"arguments"\s*:\s*(\{ # Capture the arguments JSON object starting brace (?:[^{}]++ | (?2))*? # Recursive matching for balanced braces (PCRE syntax) \}) # Closing brace of arguments \s*\} # Closing brace of the function block ''' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to("cuda") # model_1 = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",torch_dtype=torch.float16).to("cuda") medical_tools = [ { "name": "symptom_checker", "description": "Analyze symptoms and provide possible conditions.", "parameters": { "symptoms": { "description": "List of symptoms reported by the patient.", "type": "list[str]", "default": ["headache", "fever"] } } }, { "name": "medication_lookup", "description": "Look up details about a medication by its name.", "parameters": { "medication_name": { "description": "Name of the medication to look up.", "type": "str", "default": "Aspirin" } } }, { "name": "book_appointment", "description": "Schedule a medical appointment with a doctor.", "parameters": { "patient_name": { "description": "Name of the patient.", "type": "str", "default": "John Doe" }, "doctor_specialty": { "description": "Specialty of the doctor to book.", "type": "str", "default": "general practitioner" }, "date": { "description": "Preferred date of appointment (YYYY-MM-DD).", "type": "str", "default": "2025-08-20" } } }, { "name": "get_lab_results", "description": "Retrieve lab test results for a patient by test ID.", "parameters": { "patient_id": { "description": "Unique patient identifier.", "type": "str", "default": "123456" }, "test_id": { "description": "Lab test identifier.", "type": "str", "default": "cbc" } } }, { "name": "request_missing_info", "description": "Ask the user for missing or incomplete information needed to fulfill their request.", "parameters": { "missing_fields": { "description": "List of missing required fields to be clarified by the user.", "type": "list[str]", "default": [] }, "context": { "description": "Optional context or explanation to help the user provide the missing information.", "type": "str", "default": "" } } }, { "name": "medical_device_info", "description": "Retrieve detailed information about a medical device by its name or model number.", "parameters": { "device_name": { "description": "The name or model number of the medical device to look up.", "type": "str", "default": "Blood Pressure Monitor" } } }, { "name": 
"record_blood_pressure", "description": "Record a patient's blood pressure reading with systolic, diastolic, and pulse rate values.", "parameters": { "patient_id": { "description": "Unique identifier of the patient.", "type": "str", "default": "123456" }, "systolic": { "description": "Systolic blood pressure value (mmHg).", "type": "int", "default": 120 }, "diastolic": { "description": "Diastolic blood pressure value (mmHg).", "type": "int", "default": 80 }, "pulse_rate": { "description": "Pulse rate in beats per minute.", "type": "int", "default": 70 }, "measurement_time": { "description": "Timestamp of the measurement (YYYY-MM-DD HH:MM).", "type": "str", "default": "2025-08-12 09:00" } } }, { "name": "start_blood_pressure_test", "description": "Initiate a blood pressure measurement test for a patient using a connected device.", "parameters": { "patient_id": { "description": "Unique identifier of the patient.", "type": "str", "default": "123456" }, "device_id": { "description": "Identifier or model of the blood pressure measuring device.", "type": "str", "default": "BP-Device-001" } } } ] # Compose the system prompt embedding the tools JSON system_prompt = f""" You are an intelligent AI assistant that uses available tools (functions) to help users achieve their medical-related goals. Your job is to understand the user's intent, identify missing information if needed, and then select and call the most appropriate function(s) to solve the task. # Rules: - ALWAYS use the tools provided to answer the user's request, unless explicitly told not to. - Ask clarifying questions ONLY if the user's request is ambiguous or lacks required input parameters. - If multiple tools are needed, use them in sequence. - DO NOT make up data or assume values — request any missing input clearly. # Output Format: - Respond using a JSON list of function calls in the following format: [ {{ "name": "function_name", "arguments": {{ "param1": "value1", "param2": "value2" }} ] - Only include the functions needed to complete the task. - If no function is needed or the input is unclear, ask a clarifying question instead of guessing. - Do NOT respond with explanations or natural language outside the JSON block unless explicitly instructed. Following are the tools provided to you: {json.dumps(medical_tools, indent=2)} """ SCOPES = ['https://www.googleapis.com/auth/calendar'] def symptom_checker(kwargs): print(f"Checking diseases for following symptoms on the web:") symptoms = kwargs.get("symptoms",[]) print(symptoms) for i, arg in enumerate(symptoms): print(f"{i}. {arg}") results = TavilySearchResults() information = "" for result in results.invoke(f"What causes {''.join(symptoms)}"): information = information + result["content"] + "\n" return { "status":200, "message":information } def medication_lookup(kwargs): medication_name = kwargs.get("medication_name") print(f"Looking up the web for information on {medication_name}....") results = TavilySearchResults() information = "" for result in results.invoke(f"What is {medication_name}?"): information = information + result["content"] + "\n" return { "status": 200, "message": information } def create_google_calendar_meeting( summary: str, start_datetime: str, end_datetime: str, attendees_emails: list, timezone: str = 'America/Chicago' ): """ Creates a Google Calendar event. Args: summary (str): Event title. start_datetime (str): Start datetime in ISO format, e.g., "2025-08-18T10:00:00-06:00". end_datetime (str): End datetime in ISO format. 
attendees_emails (list): List of attendee emails. timezone (str): Timezone string, default 'America/Chicago'. """ creds = None # Load saved credentials if available if os.path.exists('token.pickle'): with open('token.pickle', 'rb') as token: creds = pickle.load(token) # Authenticate if necessary if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: # Use google-auth's Request for the token refresh (not urllib.request.Request) from google.auth.transport.requests import Request as GoogleAuthRequest creds.refresh(GoogleAuthRequest()) else: flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES) creds = flow.run_local_server(port=0) with open('token.pickle', 'wb') as token: pickle.dump(creds, token) service = build('calendar', 'v3', credentials=creds) event = { 'summary': summary, 'location': 'Virtual / Google Meet', 'description': f'{summary} meeting.', 'start': {'dateTime': start_datetime, 'timeZone': timezone}, 'end': {'dateTime': end_datetime, 'timeZone': timezone}, 'attendees': [{'email': email} for email in attendees_emails], 'reminders': {'useDefault': True}, } created_event = service.events().insert( calendarId='primary', body=event, sendUpdates='all' ).execute() print(f"Event created: {created_event.get('htmlLink')}") return created_event def book_appointment(kwargs): patient_name = kwargs.get("patient_name") doctor_specialty = kwargs.get("doctor_specialty") date_str = kwargs.get("date") parsed_date = datetime.strptime(date_str, "%Y-%m-%d").date() # Default time 9:00 AM Mountain Time mountain_tz = pytz.timezone("America/Denver") dt_mt = datetime.combine(parsed_date, time_1(9, 0)) dt_mt = mountain_tz.localize(dt_mt) # Autodetect local timezone local_tz = get_localzone() dt_local = dt_mt.astimezone(local_tz) dt_local_end = dt_local + timedelta(hours=1) result = create_google_calendar_meeting( f"Meeting for {patient_name}", dt_local.isoformat(), dt_local_end.isoformat(), ["altondsouza02@gmail.com", "aldsouza@ualberta.ca"] ) return { "status":200, "message": f"Event Created:{result}" } function_execution_map = { "symptom_checker": symptom_checker, "medication_lookup": medication_lookup, "book_appointment": book_appointment } # Example prompt using the medical tools # messages = [ # { # "content": system_prompt, # "role": "system" # }, # { # "content": ( # "I have a headache and mild fever. What could be the possible conditions? " # "Also, lookup medication details for 'Ibuprofen'. " # "Please book an appointment for patient 'Alice Smith' with a neurologist on 2025-09-01."
# ), # "role": "user" # } # ] # streamer = TextStreamer(tokenizer, skip_prompt=True) # streamer = TextIteratorStreamer(tokenizer, skip_prompt=True) # inputs = tokenizer.apply_chat_template( # messages, # add_generation_prompt=True, # tokenize=True, # return_dict=True, # return_tensors="pt", # ).to(model.device) # inputs = tokenizer.apply_chat_template( # messages, # add_generation_prompt=True, # tokenize=True, # return_dict=True, # return_tensors="pt", # ).to(mo) # generation_kwargs = dict(inputs,streamer=streamer, # max_new_tokens=4096, # temperature=0.7,) # thread = Thread(target=model.generate, kwargs=generation_kwargs,daemon=True) # thread.start() # for new_text in streamer: # print(new_text, end="") # with torch.no_grad(): # outputs = model.generate( # **inputs,streamer=streamer, # max_new_tokens=4096, # temperature=0.7, # ) class State(TypedDict): messages: List[Dict[str, Any]] plan: List[Dict[str, Any]] task: str graph_builder = StateGraph(State) PLANNING_AGENT = "PLANNING_AGENT" def planning(state: State): print("Coming up with Plan") messages = state.get("messages", []) inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", ).to(model.device) streamer = TextIteratorStreamer(tokenizer, skip_prompt=True) generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=4096, temperature=0.7, ) thread = Thread(target=model.generate, kwargs=generation_kwargs, daemon=True) thread.start() generated_text = "" for new_text in streamer: print(new_text, end="") generated_text = generated_text + new_text generated_text = generated_text.replace("<|end▁of▁sentence|>","").replace("</think>","") matches = regex.findall(pattern, generated_text, regex.VERBOSE) plan = state.get("plan", []) for i, (func_name, args_json) in enumerate(matches, 1): plan_entry = dict() plan_entry["function_name"] = func_name plan_entry["arguments"] = json.loads(args_json) plan.append(plan_entry) messages.append({"role": "assistant", "content": generated_text}) return {"messages":messages, "plan": plan} ROUTER = "ROUTER" def router(state: State): plan = state.get("plan", []) if len(plan) > 0: return "execute_plan" return "respond" def execute_plan(state: State): print("Executing") plan = state.get("plan", []) for plan_entry in plan: plan_entry["status"] = dict() print(f"Executing {plan_entry['function_name']} with details {plan_entry['arguments']}") print("Approve Execution?(y/n)") response = input() response = response.strip().lower() if response == "y": print("Approved.") if plan_entry["function_name"] in function_execution_map.keys(): function = function_execution_map[plan_entry["function_name"]] result = function(plan_entry["arguments"]) plan_entry["status"] = result else: print(f"Capability not implemented for {plan_entry['function_name']}") print("Done with task.") print("Proceeding with next.") elif response == "n": print("Not approved.") else: print("Invalid input, please enter 'y' or 'n'.") return {"plan": plan} def respond(state: State): print(state.get("messages")[-1]["content"]) return {"plan": state.get("plan")} def summarize(state: State): plan = state.get("plan") messages = state.get("messages") summary_prompt = [] summary_prompt.append({ "role": "user","content": f"Summarize the results obtained from the following tool executions:\n {json.dumps(plan,indent=2)}" }) inputs = tokenizer.apply_chat_template( summary_prompt, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", ).to(model.device) streamer = 
TextIteratorStreamer(tokenizer, skip_prompt=True) generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=4096, temperature=0.7, ) thread = Thread(target=model.generate, kwargs=generation_kwargs, daemon=True) thread.start() generated_text = "" for new_text in streamer: print(new_text, end="") generated_text = generated_text + new_text messages.append({"role": "assistant", "content": generated_text}) return {"messages":messages} EXECUTE_PLAN = "EXECUTE_PLAN" RESPOND = "RESPOND" SUMMARIZE = "SUMMARIZE" graph_builder.add_node(PLANNING_AGENT, planning) graph_builder.add_node(EXECUTE_PLAN, execute_plan) graph_builder.add_node(RESPOND, respond) graph_builder.add_node(SUMMARIZE, summarize) graph_builder.add_edge(START, PLANNING_AGENT) graph_builder.add_conditional_edges(PLANNING_AGENT, router, { "execute_plan": EXECUTE_PLAN, "respond": RESPOND }) graph_builder.add_edge(EXECUTE_PLAN, SUMMARIZE) graph_builder.add_edge(SUMMARIZE, RESPOND) graph_builder.add_edge(RESPOND, END) compiled_graph = graph_builder.compile() png_bytes = compiled_graph.get_graph().draw_mermaid_png() # Save to file with open("graph.png", "wb") as f: f.write(png_bytes) print("Graph saved as graph.png") messages = [ { "content": system_prompt, "role": "system" }, { "content": ( "I have a headache and mild fever. What could be the possible conditions? " "Also, lookup medication details for 'Ibuprofen'. " "Please book an appointment for patient 'Alice Smith' with a neurologist on 2025-08-18." ), "role": "user" } ] different_user_prompt = [ { "content": system_prompt, "role": "system" }, { "content": ( "My mother has chest pain and shortness of breath. " "Can you analyze her symptoms? " "Also, please look up information about 'Nitroglycerin' medication. " "Finally, get lab results for patient ID '987654' for the test 'lipid_panel'." 
), "role": "user" } ] compiled_graph.invoke({"messages": messages}) # compiled_graph.invoke({"messages": different_user_prompt}) ``` ## Requirements ```python accelerate==1.9.0 aiohappyeyeballs==2.6.1 aiohttp==3.12.15 aiosignal==1.4.0 annotated-types==0.7.0 anyio==4.10.0 attrs==25.3.0 auto_gptq==0.7.1 autolab-core==1.1.1 beautifulsoup4==4.13.4 bitsandbytes==0.46.1 cachetools==5.5.2 certifi==2025.7.14 charset-normalizer==3.4.2 click==8.2.1 colorama==0.4.6 colorlog==6.9.0 contourpy==1.3.3 cycler==0.12.1 dataclasses-json==0.6.7 datasets==4.0.0 dateparser==1.2.2 ddgs==9.5.4 dill==0.3.8 dotenv==0.9.9 duckduckgo_search==8.1.1 duckling==1.8.0 filelock==3.13.1 fonttools==4.59.0 freetype-py==2.5.1 frozenlist==1.7.0 fsspec==2024.6.1 gekko==1.3.0 google-api-core==2.25.1 google-api-python-client==2.179.0 google-auth==2.40.3 google-auth-httplib2==0.2.0 google-auth-oauthlib==1.2.2 googleapis-common-protos==1.70.0 greenlet==3.2.4 h11==0.16.0 hf-xet==1.1.7 httpcore==1.0.9 httplib2==0.22.0 httpx==0.28.1 httpx-sse==0.4.1 huggingface-hub==0.34.3 idna==3.10 imageio==2.37.0 Jinja2==3.1.4 joblib==1.5.1 jpype1==1.6.0 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.25.0 jsonschema-specifications==2025.4.1 kiwisolver==1.4.8 langchain==0.3.27 langchain-community==0.3.27 langchain-core==0.3.74 langchain-huggingface==0.3.1 langchain-text-splitters==0.3.9 langgraph==0.6.5 langgraph-checkpoint==2.1.1 langgraph-prebuilt==0.6.4 langgraph-sdk==0.2.0 langsmith==0.4.14 lazy_loader==0.4 lxml==6.0.0 manifold3d==3.2.1 mapbox_earcut==1.0.3 markdown-it-py==3.0.0 markdownify==1.1.0 MarkupSafe==2.1.5 marshmallow==3.26.1 matplotlib==3.10.5 mdurl==0.1.2 mpmath==1.3.0 multidict==6.6.3 multiprocess==0.70.16 mypy_extensions==1.1.0 networkx==3.3 numpy==2.1.2 oauthlib==3.3.1 opencv-python==4.12.0.88 optimum==1.27.0 orjson==3.11.2 ormsgpack==1.10.0 packaging==25.0 pandas==2.3.1 peft==0.17.0 pillow==11.0.0 primp==0.15.0 propcache==0.3.2 proto-plus==1.26.1 protobuf==6.32.0 psutil==7.0.0 pyarrow==21.0.0 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycollada==0.9.2 pydantic==2.11.7 pydantic-settings==2.10.1 pydantic_core==2.33.2 pyglet==2.1.8 Pygments==2.19.2 PyOpenGL==3.1.0 pyparsing==3.2.3 pyreadline==2.1 pyrender==0.1.45 python-dateutil==2.9.0.post0 python-dotenv==1.1.1 pytz==2025.2 PyYAML==6.0.2 referencing==0.36.2 regex==2025.7.34 requests==2.32.4 requests-oauthlib==2.0.0 requests-toolbelt==1.0.0 rich==14.1.0 rouge==1.0.1 rpds-py==0.27.0 rsa==4.9.1 rtree==1.4.1 ruamel.yaml==0.18.14 ruamel.yaml.clib==0.2.12 safetensors==0.5.3 scikit-image==0.25.2 scikit-learn==1.7.1 scipy==1.16.1 sentencepiece==0.2.1 setproctitle==1.3.6 shapely==2.1.1 six==1.17.0 smolagents==1.20.0 sniffio==1.3.1 soupsieve==2.7 SQLAlchemy==2.0.43 svg.path==7.0 sympy==1.13.3 tenacity==9.1.2 threadpoolctl==3.6.0 tifffile==2025.6.11 tokenizers==0.21.4 torch==2.7.1+cu126 torchaudio==2.7.1+cu126 torchvision==0.22.1+cu126 tqdm==4.67.1 transformers==4.54.1 trimesh==4.7.4 trl==0.20.0 typing-inspect==0.9.0 typing-inspection==0.4.1 typing_extensions==4.14.1 tzdata==2025.2 tzlocal==5.3.1 uritemplate==4.2.0 urllib3==2.5.0 vhacdx==0.0.8.post2 visualization==1.0.0 xxhash==3.5.0 yarl==1.20.1 zstandard==0.24.0 ``` ## How to Get Started ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch import json torch.manual_seed(42) model_name = "aldsouza/health-agent" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to("cuda") medical_tools = [ { "name": "symptom_checker", "description": 
"Analyze symptoms and provide possible conditions.", "parameters": { "symptoms": { "description": "List of symptoms reported by the patient.", "type": "list[str]", "default": ["headache", "fever"] } } }, { "name": "medication_lookup", "description": "Look up details about a medication by its name.", "parameters": { "medication_name": { "description": "Name of the medication to look up.", "type": "str", "default": "Aspirin" } } }, { "name": "book_appointment", "description": "Schedule a medical appointment with a doctor.", "parameters": { "patient_name": { "description": "Name of the patient.", "type": "str", "default": "John Doe" }, "doctor_specialty": { "description": "Specialty of the doctor to book.", "type": "str", "default": "general practitioner" }, "date": { "description": "Preferred date of appointment (YYYY-MM-DD).", "type": "str", "default": "2025-08-20" } } }, { "name": "get_lab_results", "description": "Retrieve lab test results for a patient by test ID.", "parameters": { "patient_id": { "description": "Unique patient identifier.", "type": "str", "default": "123456" }, "test_id": { "description": "Lab test identifier.", "type": "str", "default": "cbc" } } }, { "name": "request_missing_info", "description": "Ask the user for missing or incomplete information needed to fulfill their request.", "parameters": { "missing_fields": { "description": "List of missing required fields to be clarified by the user.", "type": "list[str]", "default": [] }, "context": { "description": "Optional context or explanation to help the user provide the missing information.", "type": "str", "default": "" } } }, { "name": "medical_device_info", "description": "Retrieve detailed information about a medical device by its name or model number.", "parameters": { "device_name": { "description": "The name or model number of the medical device to look up.", "type": "str", "default": "Blood Pressure Monitor" } } }, { "name": "record_blood_pressure", "description": "Record a patient's blood pressure reading with systolic, diastolic, and pulse rate values.", "parameters": { "patient_id": { "description": "Unique identifier of the patient.", "type": "str", "default": "123456" }, "systolic": { "description": "Systolic blood pressure value (mmHg).", "type": "int", "default": 120 }, "diastolic": { "description": "Diastolic blood pressure value (mmHg).", "type": "int", "default": 80 }, "pulse_rate": { "description": "Pulse rate in beats per minute.", "type": "int", "default": 70 }, "measurement_time": { "description": "Timestamp of the measurement (YYYY-MM-DD HH:MM).", "type": "str", "default": "2025-08-12 09:00" } } }, { "name": "start_blood_pressure_test", "description": "Initiate a blood pressure measurement test for a patient using a connected device.", "parameters": { "patient_id": { "description": "Unique identifier of the patient.", "type": "str", "default": "123456" }, "device_id": { "description": "Identifier or model of the blood pressure measuring device.", "type": "str", "default": "BP-Device-001" } } } ] # Compose the system prompt embedding the tools JSON system_prompt = f""" You are an intelligent AI assistant that uses available tools (functions) to help users achieve their medical-related goals. Your job is to understand the user's intent, identify missing information if needed, and then select and call the most appropriate function(s) to solve the task. # Rules: - ALWAYS use the tools provided to answer the user's request, unless explicitly told not to. 
- Ask clarifying questions ONLY if the user's request is ambiguous or lacks required input parameters. - If multiple tools are needed, use them in sequence. - DO NOT make up data or assume values — request any missing input clearly. # Output Format: - Respond using a JSON list of function calls in the following format: [ {{ "name": "function_name", "arguments": {{ "param1": "value1", "param2": "value2" }} }} ] - Only include the functions needed to complete the task. - If no function is needed or the input is unclear, ask a clarifying question instead of guessing. - Do NOT respond with explanations or natural language outside the JSON block unless explicitly instructed. Following are the tools provided to you: {json.dumps(medical_tools, indent=2)} """ # Example prompt using the medical tools messages = [ { "content": system_prompt, "role": "system" }, { "content": ( "I have a headache and mild fever. What could be the possible conditions? " "Also, lookup medication details for 'Ibuprofen'. " "Please book an appointment for patient 'Alice Smith' with a neurologist on 2025-09-01." ), "role": "user" } ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", ).to(model.device) with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=4096, temperature=0.7, ) response = tokenizer.decode(outputs[0]) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
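The decoded `response` still contains the chat template and any reasoning text; to recover the structured tool calls, the recursive `regex` pattern from the tool-calling example above can be reused. A minimal sketch (it assumes the model emitted at least one JSON function-call block):

```python
import json
import regex  # third-party `regex` module; supports recursive patterns, unlike `re`

# Same recursive pattern as in the tool-calling example above:
# matches {"name": "...", "arguments": {...}} blocks with balanced braces.
pattern = r'''
\{
\s*"name"\s*:\s*"([^"]+)"\s*,
\s*"arguments"\s*:\s*(\{
(?:[^{}]++ | (?2))*?
\})
\s*\}
'''

for name, args_json in regex.findall(pattern, response, regex.VERBOSE):
    arguments = json.loads(args_json)  # each match yields (function name, arguments JSON)
    print(f"model requested {name} with {arguments}")
```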
yaelahnal/blockassist-bc-mute_clawed_crab_1755612368
yaelahnal
2025-08-19T14:07:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:07:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute clawed crab --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
0xZeno/sdxl-base-1.0-wharton-footballer-optimized-loss
0xZeno
2025-08-19T14:06:53Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-08-19T10:40:12Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a photo of sks football player widget: - text: a photo of sks football player playing football on a beach output: url: image_0.png - text: a photo of sks football player playing football on a beach output: url: image_1.png - text: a photo of sks football player playing football on a beach output: url: image_2.png - text: a photo of sks football player playing football on a beach output: url: image_3.png tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - 0xZeno/sdxl-base-1.0-wharton-footballer-optimized-loss <Gallery /> ## Model description These are 0xZeno/sdxl-base-1.0-wharton-footballer-optimized-loss LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks football player to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](0xZeno/sdxl-base-1.0-wharton-footballer-optimized-loss/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (a sketch is provided below) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
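A minimal usage sketch for the `How to use` TODO above, assuming standard diffusers SDXL + LoRA loading (the scheduler defaults and step count are illustrative, not taken from the training run):

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach the LoRA adapter from this repository
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("0xZeno/sdxl-base-1.0-wharton-footballer-optimized-loss")

# The prompt uses the "sks" rare-token identifier from DreamBooth training
image = pipe(
    "a photo of sks football player playing football on a beach",
    num_inference_steps=30,
).images[0]
image.save("sks_footballer.png")
```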
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755610573
sampingkaca72
2025-08-19T14:02:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:02:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/deadpool-flux1.d-sdxl
Muapi
2025-08-19T14:00:54Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T14:00:47Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Deadpool - Flux1.D & SDXL ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: deadpool in a bodysuit and a mask ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:195606@728769", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Muapi/1980-s-style-xl-f1d
Muapi
2025-08-19T14:00:35Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T14:00:26Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # 1980's style XL + F1D ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: 1980 style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:376914@894083", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
lilTAT/blockassist-bc-gentle_rugged_hare_1755611973
lilTAT
2025-08-19T14:00:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:59:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fengpeisheng1/mergekit-slerp-iskhcfu-IQ4_NL-GGUF
fengpeisheng1
2025-08-19T14:00:00Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:fengpeisheng1/mergekit-slerp-iskhcfu", "base_model:quantized:fengpeisheng1/mergekit-slerp-iskhcfu", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-19T13:59:39Z
--- base_model: fengpeisheng1/mergekit-slerp-iskhcfu library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # fengpeisheng1/mergekit-slerp-iskhcfu-IQ4_NL-GGUF This model was converted to GGUF format from [`fengpeisheng1/mergekit-slerp-iskhcfu`](https://huggingface.co/fengpeisheng1/mergekit-slerp-iskhcfu) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/fengpeisheng1/mergekit-slerp-iskhcfu) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo fengpeisheng1/mergekit-slerp-iskhcfu-IQ4_NL-GGUF --hf-file mergekit-slerp-iskhcfu-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo fengpeisheng1/mergekit-slerp-iskhcfu-IQ4_NL-GGUF --hf-file mergekit-slerp-iskhcfu-iq4_nl-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo fengpeisheng1/mergekit-slerp-iskhcfu-IQ4_NL-GGUF --hf-file mergekit-slerp-iskhcfu-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo fengpeisheng1/mergekit-slerp-iskhcfu-IQ4_NL-GGUF --hf-file mergekit-slerp-iskhcfu-iq4_nl-imat.gguf -c 2048 ```
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755610342
ihsanridzi
2025-08-19T13:59:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry flexible owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:59:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry flexible owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/neon-cyberpunk-animals-flux-sdxl
Muapi
2025-08-19T13:56:12Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:56:03Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Neon Cyberpunk - Animals FLUX & SDXL ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: mad-cybranmls, cybernetic parts, mechanical parts ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:281944@1067893", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755610017
pempekmangedd
2025-08-19T13:55:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "patterned sturdy dolphin", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:55:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - patterned sturdy dolphin --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755610110
helmutsukocok
2025-08-19T13:54:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:54:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cyt9772/my-bert-fine-tuned1
cyt9772
2025-08-19T13:51:30Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T13:50:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
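Since the template's usage section is empty, here is a minimal sketch using the transformers `pipeline` API (the label names and the exact classification task are assumptions; the card does not document the fine-tuning objective):

```python
from transformers import pipeline

# Load the fine-tuned BERT classifier from this repository
clf = pipeline("text-classification", model="cyt9772/my-bert-fine-tuned1")
print(clf("This is a test sentence."))  # e.g. [{'label': ..., 'score': ...}]
```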
alok0777/blockassist-bc-masked_pensive_lemur_1755611305
alok0777
2025-08-19T13:50:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked pensive lemur", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:49:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - masked pensive lemur --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/the-deep-abyss-flux
Muapi
2025-08-19T13:50:48Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:50:36Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # The Deep Abyss FLUX ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: 4byss ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:930359@1041413", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Muapi/ethereal-fantasy
Muapi
2025-08-19T13:50:26Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:50:13Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Ethereal Fantasy ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1017670@1141054", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Muapi/kodak-vision3-500t-analog-film-stocks-footage-f1d-xl
Muapi
2025-08-19T13:48:22Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:48:08Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Kodak Vision3 500T analog film stocks Footage F1D + XL ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: Kodak Vision3 500T, analog film stocks ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:725625@876689", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
mang3dd/blockassist-bc-tangled_slithering_alligator_1755609477
mang3dd
2025-08-19T13:45:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:44:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ShuaiYang03/GR00T-N1.5-Lerobot-SimplerEnv-BridgeV2
ShuaiYang03
2025-08-19T13:43:00Z
5
1
null
[ "safetensors", "gr00t_n1_5", "dataset:IPEC-COMMUNITY/bridge_orig_lerobot", "base_model:nvidia/GR00T-N1.5-3B", "base_model:finetune:nvidia/GR00T-N1.5-3B", "region:us" ]
null
2025-08-12T06:47:55Z
---
datasets:
- IPEC-COMMUNITY/bridge_orig_lerobot
base_model:
- nvidia/GR00T-N1.5-3B
---

A GR00T model fine-tuned on the Bridge dataset (30k steps, 8 A100 GPUs) using the default fine-tuning settings (i.e., freezing the VLM backbone). The evaluation was conducted using the SimplerEnv-OpenVLA repository (https://github.com/DelinQu/SimplerEnv-OpenVLA), with thanks to their contributions to the community. This fine-tuned model should not be considered representative of GR00T's actual performance.

| task | GR00T-N1.5 | RT-1(Converged) | RT-1(15%) | RT-1-X | RT-2-X | Octo-Base | Octo-Small | RT-1(begin) | OpenVLA | RoboVLM |
|:--|--:|--:|--:|--:|--:|--:|--:|--:|--:|--:|
| put_spoon_on_tablecloth/matching_partial | 0.8333333333333334 | nan | nan | 0.167 | nan | 0.347 | 0.778 | nan | 0.041 | 0.375 |
| put_spoon_on_tablecloth/matching_entire | 0.625 | nan | nan | 0.0 | nan | 0.125 | 0.472 | nan | 0.0 | 0.208 |
| put_carrot_on_plate/matching_partial | 0.5416666666666666 | nan | nan | 0.208 | nan | 0.528 | 0.278 | nan | 0.333 | 0.333 |
| put_carrot_on_plate/matching_entire | 0.4583333333333333 | nan | nan | 0.042 | nan | 0.083 | 0.097 | nan | 0.0 | 0.25 |
| stack_green_block_on_yellow_block/matching_partial | 0.7083333333333334 | nan | nan | 0.083 | nan | 0.319 | 0.403 | nan | 0.125 | 0.083 |
| stack_green_block_on_yellow_block/matching_entire | 0.16666666666666666 | nan | nan | 0.0 | nan | 0.0 | 0.042 | nan | 0.0 | 0.083 |
| put_eggplant_in_basket/matching_partial | 0.4166666666666667 | nan | nan | 0.0 | nan | 0.667 | 0.875 | nan | 0.083 | 0.0 |
| put_eggplant_in_basket/matching_entire | 0.20833333333333334 | nan | nan | 0.0 | nan | 0.431 | 0.569 | nan | 0.041 | 0.0 |

Data configuration: in addition to adding the following code to `data_config.py`, I also provide the `modality.json`, which is required for the GR00T dataloader.
```python
class FractalDataConfig(So100DataConfig):
    video_keys = ["video.image"]
    state_keys = ["state.x", "state.y", "state.z", "state.rx", "state.ry", "state.rz", "state.rw", "state.gripper"]
    action_keys = ["action.x", "action.y", "action.z", "action.roll", "action.pitch", "action.yaw", "action.gripper"]
    language_keys = ["annotation.human.action.task_description"]

    def transform(self) -> ModalityTransform:
        transforms = [
            # video transforms
            VideoToTensor(apply_to=self.video_keys),
            VideoCrop(apply_to=self.video_keys, scale=0.95),
            VideoResize(apply_to=self.video_keys, height=224, width=224, interpolation="linear"),
            VideoColorJitter(
                apply_to=self.video_keys,
                brightness=0.3,
                contrast=0.4,
                saturation=0.5,
                hue=0.08,
            ),
            VideoToNumpy(apply_to=self.video_keys),
            # state transforms
            StateActionToTensor(apply_to=self.state_keys),
            StateActionTransform(
                apply_to=self.state_keys,
                normalization_modes={key: "min_max" for key in self.state_keys},
            ),
            # action transforms
            StateActionToTensor(apply_to=self.action_keys),
            StateActionTransform(
                apply_to=self.action_keys,
                normalization_modes={key: "min_max" for key in self.action_keys},
            ),
            # concat transforms
            ConcatTransform(
                video_concat_order=self.video_keys,
                state_concat_order=self.state_keys,
                action_concat_order=self.action_keys,
            ),
            # model-specific transform
            GR00TTransform(
                state_horizon=len(self.observation_indices),
                action_horizon=len(self.action_indices),
                max_state_dim=64,
                max_action_dim=32,
            ),
        ]
        return ComposedModalityTransform(transforms=transforms)


class BridgeDataConfig(FractalDataConfig):
    video_keys = ["video.image_0"]
    state_keys = ["state.x", "state.y", "state.z", "state.roll", "state.pitch", "state.yaw", "state.pad", "state.gripper"]
    action_keys = ["action.x", "action.y", "action.z", "action.roll", "action.pitch", "action.yaw", "action.gripper"]
    language_keys = ["annotation.human.action.task_description"]
```

Extra embodiment tag to reproduce the results:

```python
class EmbodimentTag(Enum):
    OXE = 'oxe'

# Embodiment tag string: to projector index in the Action Expert Module
EMBODIMENT_TAG_MAPPING = {
    EmbodimentTag.OXE.value: 7,
}
```

Thanks to @youliangtan, who re-evaluated my results: https://huggingface.co/ShuaiYang03/GR00T-N1.5-Lerobot-SimplerEnv-BridgeV2/discussions/1

- https://github.com/NVIDIA/Isaac-GR00T (commit hash aa6441feb4f08233d55cbfd2082753cdc01fa676)
- Modified SimplerEnv: https://github.com/youliangtan/SimplerEnv
lilTAT/blockassist-bc-gentle_rugged_hare_1755610875
lilTAT
2025-08-19T13:41:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:41:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1755608966
koloni
2025-08-19T13:36:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:36:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Mostefa-Terbeche/diabetic-retinopathy-messidor-resnet50-gentle-20250619-233154
Mostefa-Terbeche
2025-08-19T13:36:21Z
0
0
null
[ "diabetic-retinopathy", "medical-imaging", "pytorch", "computer-vision", "retinal-imaging", "dataset:messidor", "license:apache-2.0", "model-index", "region:us" ]
null
2025-08-19T12:47:08Z
--- license: apache-2.0 tags: - diabetic-retinopathy - medical-imaging - pytorch - computer-vision - retinal-imaging datasets: - messidor metrics: - accuracy - quadratic-kappa - auc model-index: - name: messidor_resnet50_gentle results: - task: type: image-classification name: Diabetic Retinopathy Classification dataset: type: messidor name: MESSIDOR metrics: - type: accuracy value: 0.28160919540229884 - type: quadratic-kappa value: 0.5726915520628684 --- # Diabetic Retinopathy Classification Model ## Model Description This model is trained for diabetic retinopathy classification using the resnet50 architecture on the messidor dataset with gentle preprocessing. ## Model Details - **Architecture**: resnet50 - **Dataset**: messidor - **Preprocessing**: gentle - **Training Date**: 20250619-233154 - **Task**: 5-class diabetic retinopathy grading (0-4) - **Directory**: messidor_resnet50_20250619-233154_new ## Performance - **Test Accuracy**: 0.28160919540229884 - **Test Quadratic Kappa**: 0.5726915520628684 - **Validation Kappa**: 0.5726915520628684 ## Usage ```python import torch from huggingface_hub import hf_hub_download # Download model model_path = hf_hub_download( repo_id="your-username/diabetic-retinopathy-messidor-resnet50-gentle", filename="model_best.pt" ) # Load model model = torch.load(model_path, map_location='cpu') ``` ## Classes - 0: No DR (No diabetic retinopathy) - 1: Mild DR (Mild non-proliferative diabetic retinopathy) - 2: Moderate DR (Moderate non-proliferative diabetic retinopathy) - 3: Severe DR (Severe non-proliferative diabetic retinopathy) - 4: Proliferative DR (Proliferative diabetic retinopathy) ## Citation If you use this model, please cite your research paper/thesis.
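Building on the snippet above, a hedged inference sketch; the 224×224 input size, ImageNet normalization, and the checkpoint storing a full module (rather than a state dict) are assumptions, since the card does not document preprocessing:

```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed preprocessing: resize and center-crop to 224 with ImageNet statistics
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model.eval()  # `model` as loaded in the snippet above
x = preprocess(Image.open("fundus.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(x), dim=-1)[0]  # probabilities over DR grades 0-4
print(f"predicted grade: {probs.argmax().item()}")
```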
Whitesmasher/Wan22Testing
Whitesmasher
2025-08-19T13:36:09Z
0
0
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2025-08-19T12:54:32Z
--- license: apache-2.0 ---
lilTAT/blockassist-bc-gentle_rugged_hare_1755610214
lilTAT
2025-08-19T13:30:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:30:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/ethereal-gothic-sd1-sdxl-pony-flux
Muapi
2025-08-19T13:28:48Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:28:38Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Ethereal Gothic (SD1, SDXL, Pony, Flux) ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ArsMJStyle, Etherial Gothic ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1072957@1204428", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Muapi/cinematic-text-title-film-cover-on-screen-style-xl-f1d
Muapi
2025-08-19T13:28:24Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:28:11Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Cinematic text title + Film Cover (on screen) style XL + F1D ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: perfect text title style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:520481@893826", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
wheeler404/qwen2-tiny
wheeler404
2025-08-19T13:27:50Z
231
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-05T13:54:43Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> A tiny test model with Qwen2.5 architecture ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
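The "How to Get Started" section of the card above is still a placeholder. A minimal sketch of what it would typically contain for a small Qwen2.5-architecture checkpoint, assuming a hypothetical repo id `your-org/tiny-qwen2.5-test` (the actual id is not stated in the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/tiny-qwen2.5-test"  # hypothetical placeholder; substitute the real repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Run a short generation to confirm the checkpoint loads and decodes.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```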
Muapi/sony-mavica-mvc-fd7-real-digicam
Muapi
2025-08-19T13:27:07Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:26:59Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Sony Mavica MVC-FD7 (Real digicam) ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: m8vic2 ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1147127@1290161", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
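Since this LoRA lists `m8vic2` as its trained trigger word, a prompt that actually activates the digicam look should include it. A hedged variant of the payload above (the prompt text itself is illustrative):

```python
payload = {
    "prompt": "m8vic2, late-90s digicam snapshot of a city street at night, harsh on-camera flash",  # trigger word first
    "model_id": [{"model": "civitai:1147127@1290161", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1,
}
```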
thanobidex/blockassist-bc-colorful_shiny_hare_1755608295
thanobidex
2025-08-19T13:26:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:26:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful shiny hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ChevalierJoseph/X1
ChevalierJoseph
2025-08-19T13:25:14Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T13:19:26Z
--- base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** ChevalierJoseph - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
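The card does not include a usage snippet. A minimal inference sketch, assuming the fine-tuned weights were pushed in a standard transformers layout under `ChevalierJoseph/X1`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ChevalierJoseph/X1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Mistral-instruct checkpoints ship a chat template; use it to format the prompt.
messages = [{"role": "user", "content": "In one sentence, what does this model do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```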
yaelahnal/blockassist-bc-mute_clawed_crab_1755609838
yaelahnal
2025-08-19T13:25:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:24:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute clawed crab --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ransss/Mystic-Rune-v2-12B-Q8_0-GGUF
Ransss
2025-08-19T13:24:30Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Vortex5/Mystic-Rune-v2-12B", "base_model:quantized:Vortex5/Mystic-Rune-v2-12B", "endpoints_compatible", "region:us" ]
null
2025-08-19T13:23:35Z
--- base_model: Vortex5/Mystic-Rune-v2-12B library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # Ransss/Mystic-Rune-v2-12B-Q8_0-GGUF This model was converted to GGUF format from [`Vortex5/Mystic-Rune-v2-12B`](https://huggingface.co/Vortex5/Mystic-Rune-v2-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Vortex5/Mystic-Rune-v2-12B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Ransss/Mystic-Rune-v2-12B-Q8_0-GGUF --hf-file mystic-rune-v2-12b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Ransss/Mystic-Rune-v2-12B-Q8_0-GGUF --hf-file mystic-rune-v2-12b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Ransss/Mystic-Rune-v2-12B-Q8_0-GGUF --hf-file mystic-rune-v2-12b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Ransss/Mystic-Rune-v2-12B-Q8_0-GGUF --hf-file mystic-rune-v2-12b-q8_0.gguf -c 2048 ```
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755607796
milliarderdol
2025-08-19T13:22:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring rough scorpion", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:22:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring rough scorpion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/animals-in-human-activities-flux-sdxl
Muapi
2025-08-19T13:22:31Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:22:21Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Animals in human activities Flux/SDXL ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:593514@772422", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
mradermacher/Qwen3_Medical_GRPO-i1-GGUF
mradermacher
2025-08-19T13:22:18Z
791
2
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen3", "medical", "en", "zh", "dataset:FreedomIntelligence/medical-o1-reasoning-SFT", "dataset:lastmass/medical-o1-reasoning-SFT-keywords", "base_model:lastmass/Qwen3_Medical_GRPO", "base_model:quantized:lastmass/Qwen3_Medical_GRPO", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-07-24T16:02:43Z
--- base_model: lastmass/Qwen3_Medical_GRPO datasets: - FreedomIntelligence/medical-o1-reasoning-SFT - lastmass/medical-o1-reasoning-SFT-keywords language: - en - zh library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - qwen3 - medical --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/lastmass/Qwen3_Medical_GRPO <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3_Medical_GRPO-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* | | 
[GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF/resolve/main/Qwen3_Medical_GRPO.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
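As a scripted alternative to the download links in the table, a minimal sketch using `huggingface_hub` (the filename is copied from the "fast, recommended" Q4_K_M row above):

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant into the local Hugging Face cache.
path = hf_hub_download(
    repo_id="mradermacher/Qwen3_Medical_GRPO-i1-GGUF",
    filename="Qwen3_Medical_GRPO.i1-Q4_K_M.gguf",
)
print(path)  # pass this path to llama.cpp via its -m / --model flag
```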
lilTAT/blockassist-bc-gentle_rugged_hare_1755609552
lilTAT
2025-08-19T13:19:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:19:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755607889
vwzyrraz7l
2025-08-19T13:18:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:18:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AllisonPierce94/vit-rps-out
AllisonPierce94
2025-08-19T13:16:54Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-08-19T03:51:00Z
--- library_name: transformers tags: - generated_from_trainer datasets: - imagefolder model-index: - name: vit-rps-out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-rps-out This model was trained from scratch on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.55.2 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
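The hyperparameters above map directly onto `transformers.TrainingArguments`. A sketch reconstructing them (the output directory is assumed from the model name, not confirmed by the card):

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed in the card.
training_args = TrainingArguments(
    output_dir="vit-rps-out",        # assumed from the model name
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",             # OptimizerNames.ADAMW_TORCH
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```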
lilTAT/blockassist-bc-gentle_rugged_hare_1755609232
lilTAT
2025-08-19T13:14:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:14:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1755607453
chainway9
2025-08-19T13:13:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:13:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755607670
lisaozill03
2025-08-19T13:12:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:12:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755609008
canoplos112
2025-08-19T13:12:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping sleek squirrel", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:10:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping sleek squirrel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Growcompany/flan-t5-base-Q4_K_M-GGUF
Growcompany
2025-08-19T13:08:57Z
0
0
null
[ "gguf", "text2text-generation", "llama-cpp", "gguf-my-repo", "en", "fr", "ro", "de", "multilingual", "dataset:svakulenk0/qrecc", "dataset:taskmaster2", "dataset:djaym7/wiki_dialog", "dataset:deepmind/code_contests", "dataset:lambada", "dataset:gsm8k", "dataset:aqua_rat", "dataset:esnli", "dataset:quasc", "dataset:qed", "base_model:google/flan-t5-base", "base_model:quantized:google/flan-t5-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T13:08:54Z
--- language: - en - fr - ro - de - multilingual tags: - text2text-generation - llama-cpp - gguf-my-repo widget: - text: 'Translate to German: My name is Arthur' example_title: Translation - text: Please answer to the following question. Who is going to be the next Ballon d'or? example_title: Question Answering - text: 'Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering.' example_title: Logical reasoning - text: Please answer the following question. What is the boiling point of Nitrogen? example_title: Scientific knowledge - text: Answer the following yes/no question. Can you write a whole Haiku in a single tweet? example_title: Yes/no question - text: Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet? example_title: Reasoning task - text: 'Q: ( False or not False or False ) is? A: Let''s think step by step' example_title: Boolean Expressions - text: The square root of x is the cube root of y. What is y to the power of 2, if x = 4? example_title: Math reasoning - text: 'Premise: At my age you will probably have learnt one lesson. Hypothesis: It''s not certain how many lessons you''ll learn by your thirties. Does the premise entail the hypothesis?' example_title: Premise and hypothesis datasets: - svakulenk0/qrecc - taskmaster2 - djaym7/wiki_dialog - deepmind/code_contests - lambada - gsm8k - aqua_rat - esnli - quasc - qed license: apache-2.0 base_model: google/flan-t5-base --- # Growcompany/flan-t5-base-Q4_K_M-GGUF This model was converted to GGUF format from [`google/flan-t5-base`](https://huggingface.co/google/flan-t5-base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/google/flan-t5-base) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Growcompany/flan-t5-base-Q4_K_M-GGUF --hf-file flan-t5-base-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Growcompany/flan-t5-base-Q4_K_M-GGUF --hf-file flan-t5-base-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Growcompany/flan-t5-base-Q4_K_M-GGUF --hf-file flan-t5-base-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Growcompany/flan-t5-base-Q4_K_M-GGUF --hf-file flan-t5-base-q4_k_m.gguf -c 2048 ```
Muapi/flux-film-foto
Muapi
2025-08-19T13:06:27Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:06:11Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Flux Film Foto ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: flmft photo style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:646458@723205", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
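As with other trigger-word LoRAs, this card lists `flmft photo style` as its trained words, so a prompt that activates the film look should lead with them. A hedged variant of the payload above (the rest of the prompt is illustrative):

```python
payload = {
    "prompt": "flmft photo style, grainy 35mm portrait in soft window light",  # trained trigger words up front
    "model_id": [{"model": "civitai:646458@723205", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1,
}
```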
koloni/blockassist-bc-deadly_graceful_stingray_1755606803
koloni
2025-08-19T13:02:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:01:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).