Dataset columns (type and observed range):

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-13 00:37:47 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (555 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-13 00:35:18 |
| card | string (length) | 11 | 1.01M |
opentargets/locus_to_gene_25.09
opentargets
2025-08-19T16:12:41Z
0
0
sklearn
[ "sklearn", "skops", "tabular-classification", "region:us" ]
tabular-classification
2025-08-19T16:12:38Z
--- library_name: sklearn tags: - sklearn - skops - tabular-classification model_format: skops model_file: classifier.skops widget: - structuredData: credibleSetConfidence: - 0.75 - 0.75 - 0.25 distanceFootprintMean: - 1.0 - 1.0 - 0.9948455095291138 distanceFootprintMeanNeighbourhood: - 1.0 - 1.0 - 1.0 distanceSentinelFootprint: - 1.0 - 1.0 - 0.9999213218688965 distanceSentinelFootprintNeighbourhood: - 1.0 - 1.0 - 1.0 distanceSentinelTss: - 0.9982281923294067 - 0.9999350309371948 - 0.9999213218688965 distanceSentinelTssNeighbourhood: - 1.0 - 1.0 - 1.0 distanceTssMean: - 0.9982281923294067 - 0.9999350309371948 - 0.9947366714477539 distanceTssMeanNeighbourhood: - 1.0 - 1.0 - 1.0 eQtlColocClppMaximum: - 0.949999988079071 - 0.0 - 0.06608512997627258 eQtlColocClppMaximumNeighbourhood: - 1.0 - 0.0 - 1.0 eQtlColocH4Maximum: - 1.0 - 0.0 - 0.0 eQtlColocH4MaximumNeighbourhood: - 1.0 - 0.0 - 0.0 geneCount500kb: - 20.0 - 15.0 - 8.0 geneId: - ENSG00000087237 - ENSG00000169174 - ENSG00000084674 goldStandardSet: - 1 - 1 - 1 pQtlColocClppMaximum: - 0.0 - 1.0 - 0.0 pQtlColocClppMaximumNeighbourhood: - 0.0 - 1.0 - 0.0 pQtlColocH4Maximum: - 0.0 - 1.0 - 0.0 pQtlColocH4MaximumNeighbourhood: - 0.0 - 1.0 - 0.0 proteinGeneCount500kb: - 8.0 - 7.0 - 3.0 sQtlColocClppMaximum: - 0.949999988079071 - 0.0 - 0.21970131993293762 sQtlColocClppMaximumNeighbourhood: - 1.0 - 0.0 - 1.0 sQtlColocH4Maximum: - 1.0 - 0.0 - 0.0 sQtlColocH4MaximumNeighbourhood: - 1.0 - 0.0 - 0.0 studyLocusId: - 005bc8624f8dd7f7c7bc63e651e9e59d - 02c442ea4fa5ab80586a6d1ff6afa843 - 235e8ce166619f33e27582fff5bc0c94 vepMaximum: - 0.33000001311302185 - 0.6600000262260437 - 0.6600000262260437 vepMaximumNeighbourhood: - 1.0 - 1.0 - 1.0 vepMean: - 0.33000001311302185 - 0.6600000262260437 - 0.0039977929554879665 vepMeanNeighbourhood: - 1.0 - 1.0 - 1.0 --- # Model description The locus-to-gene (L2G) model derives features to prioritise likely causal genes at each GWAS locus based on genetic and functional genomics features. The main categories of predictive features are: - Distance: (from credible set variants to gene) - Molecular QTL Colocalization - Variant Pathogenicity: (from VEP) More information at: https://opentargets.github.io/gentropy/python_api/methods/l2g/_l2g/ ## Intended uses & limitations [More Information Needed] ## Training Procedure Gradient Boosting Classifier ### Hyperparameters <details> <summary> Click to expand </summary> | Hyperparameter | Value | |-------------------------|-----------------| | objective | binary:logistic | | base_score | | | booster | | | callbacks | | | colsample_bylevel | | | colsample_bynode | | | colsample_bytree | 0.8 | | device | | | early_stopping_rounds | | | enable_categorical | False | | eval_metric | aucpr | | feature_types | | | feature_weights | | | gamma | | | grow_policy | | | importance_type | | | interaction_constraints | | | learning_rate | | | max_bin | | | max_cat_threshold | | | max_cat_to_onehot | | | max_delta_step | | | max_depth | 5 | | max_leaves | | | min_child_weight | 10 | | missing | nan | | monotone_constraints | | | multi_strategy | | | n_estimators | | | n_jobs | | | num_parallel_tree | | | random_state | 777 | | reg_alpha | 1 | | reg_lambda | 1.0 | | sampling_method | | | scale_pos_weight | 0.8 | | subsample | 0.8 | | tree_method | | | validate_parameters | | | verbosity | | | eta | 0.05 | </details> # How to Get Started with the Model To use the model, you can load it using the `LocusToGeneModel.load_from_hub` method. 
This returns a `LocusToGeneModel` object whose `predict` method scores a feature matrix. More information can be found at: https://opentargets.github.io/gentropy/python_api/methods/l2g/model/ # Citation https://doi.org/10.1038/s41588-021-00945-5 # License MIT
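A minimal loading sketch (not the canonical gentropy example): the import path and the exact `load_from_hub` signature below are assumptions based on the card text above.

```python
# Hedged sketch: assumes gentropy is installed and that load_from_hub
# accepts the Hub model id directly; the import path is an assumption.
from gentropy.method.l2g.model import LocusToGeneModel

model = LocusToGeneModel.load_from_hub("opentargets/locus_to_gene_25.09")
# scores = model.predict(feature_matrix)  # feature-matrix construction not shown here
```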
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755619685
Elizavr
2025-08-19T16:08:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:08:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755618074
helmutsukocok
2025-08-19T16:08:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:08:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/git-commit-message-splitter-Qwen3-4B-i1-GGUF
mradermacher
2025-08-19T16:08:07Z
0
0
null
[ "gguf", "region:us" ]
null
2025-08-19T16:08:01Z
<!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/Tavernari/git-commit-message-splitter-Qwen3-4B
mehdirafiei/bert_resume_category_prediction
mehdirafiei
2025-08-19T16:07:36Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T16:07:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/galicIA-v1-GGUF
mradermacher
2025-08-19T16:05:42Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:pajon1/galicIA-v1", "base_model:quantized:pajon1/galicIA-v1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-19T16:00:38Z
--- base_model: pajon1/galicIA-v1 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/pajon1/galicIA-v1 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#galicIA-v1-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q5_K_S.gguf) | Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/galicIA-v1-GGUF/resolve/main/galicIA-v1.f16.gguf) | f16 | 1.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
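As a concrete companion to the Usage section above, here is a hedged local-inference sketch with llama-cpp-python; the quant file name comes from the table, and the local download path is an assumption.

```python
# Sketch: assumes the Q4_K_S file was downloaded into the working directory
# (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="galicIA-v1.Q4_K_S.gguf", n_ctx=2048)
out = llm("Escribe unha frase en galego sobre o mar.", max_tokens=64)
print(out["choices"][0]["text"])
```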
AnonymousCS/xlmr_finnish_immigration2
AnonymousCS
2025-08-19T16:04:23Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-19T16:00:05Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: xlmr_finnish_immigration2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_finnish_immigration2 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1698 - Accuracy: 0.9538 - 1-f1: 0.9318 - 1-recall: 0.9535 - 1-precision: 0.9111 - Balanced Acc: 0.9538 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:| | 0.5778 | 1.0 | 5 | 0.2275 | 0.9154 | 0.8571 | 0.7674 | 0.9706 | 0.8780 | | 0.1219 | 2.0 | 10 | 0.3406 | 0.9385 | 0.9130 | 0.9767 | 0.8571 | 0.9481 | | 0.2571 | 3.0 | 15 | 0.2051 | 0.9462 | 0.9213 | 0.9535 | 0.8913 | 0.9480 | | 0.1514 | 4.0 | 20 | 0.1689 | 0.9538 | 0.9318 | 0.9535 | 0.9111 | 0.9538 | | 0.1368 | 5.0 | 25 | 0.1816 | 0.9462 | 0.9231 | 0.9767 | 0.875 | 0.9539 | | 0.1073 | 6.0 | 30 | 0.1698 | 0.9538 | 0.9318 | 0.9535 | 0.9111 | 0.9538 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
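A short inference sketch for this classifier using the standard transformers pipeline API; the Finnish example sentence is illustrative, and the label names depend on the (unspecified) training dataset.

```python
from transformers import pipeline

# Hedged sketch: output labels/scores depend on how the trainer mapped classes.
clf = pipeline("text-classification", model="AnonymousCS/xlmr_finnish_immigration2")
print(clf("Maahanmuutto on tΓ€rkeΓ€ poliittinen kysymys."))  # "Immigration is an important political question."
```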
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755619213
Elizavr
2025-08-19T16:00:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T16:00:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
annasoli/Qwen2.5-14B_SVt_l24_lr2e-4_a256_2E_technical-engineering2_KLBPA_5e6
annasoli
2025-08-19T15:59:44Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T14:51:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
QinShiHuangisavailable/output0043
QinShiHuangisavailable
2025-08-19T15:53:45Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:Qwen/Qwen2-Math-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2-Math-1.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-08-19T15:26:45Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2-Math-1.5B-Instruct tags: - generated_from_trainer model-index: - name: output0043 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output0043 This model is a fine-tuned version of [Qwen/Qwen2-Math-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-Math-1.5B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
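Because this repository holds a PEFT adapter rather than full model weights, a loading sketch (assuming the adapter applies cleanly to the stated base model) looks like this:

```python
# Hedged sketch: standard PEFT adapter loading on top of the declared base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-Math-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "QinShiHuangisavailable/output0043")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-Math-1.5B-Instruct")
```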
ShadoWeysel/blockassist-bc-aquatic_placid_skunk_1755618703
ShadoWeysel
2025-08-19T15:53:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "aquatic placid skunk", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:53:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - aquatic placid skunk --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
indoempatnol/blockassist-bc-fishy_wary_swan_1755617105
indoempatnol
2025-08-19T15:53:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:53:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy wary swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755617165
ihsanridzi
2025-08-19T15:53:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry flexible owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:53:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry flexible owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/rejection_detection-GGUF
mradermacher
2025-08-19T15:52:44Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "rejection", "no_answer", "chatgpt", "en", "dataset:argilla/notus-uf-dpo-closest-rejected", "base_model:holistic-ai/rejection_detection", "base_model:quantized:holistic-ai/rejection_detection", "license:apache-2.0", "endpoints_compatible", "region:us", "feature-extraction" ]
null
2025-08-19T15:49:39Z
--- base_model: holistic-ai/rejection_detection datasets: - argilla/notus-uf-dpo-closest-rejected language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - generated_from_trainer - rejection - no_answer - chatgpt --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/holistic-ai/rejection_detection <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#rejection_detection-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q2_K.gguf) | Q2_K | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q3_K_S.gguf) | Q3_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.IQ4_XS.gguf) | IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q3_K_L.gguf) | Q3_K_L | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q5_K_S.gguf) | Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q5_K_M.gguf) | Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/rejection_detection-GGUF/resolve/main/rejection_detection.f16.gguf) | f16 | 0.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to 
questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mang3dd/blockassist-bc-tangled_slithering_alligator_1755617041
mang3dd
2025-08-19T15:52:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:52:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
MidnightRunner/MIDNIGHT_NAI-XL_vPredV1
MidnightRunner
2025-08-19T15:50:23Z
406
2
diffusers
[ "diffusers", "SDXL", "noobai-XL", "Vpred-1.0", "text-to-image", "ComfyUI", "Automatic1111", "Diffuser", "en", "dataset:LaxharLab/NoobAI-XL-dataset", "base_model:Laxhar/noobai-XL-Vpred-1.0", "base_model:finetune:Laxhar/noobai-XL-Vpred-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-02-02T01:09:01Z
--- license: creativeml-openrail-m language: - en base_model: Laxhar/noobai-XL-Vpred-1.0 tags: - SDXL - noobai-XL - Vpred-1.0 - text-to-image - ComfyUI - Automatic1111 - Diffuser pipeline_tag: text-to-image library_name: diffusers datasets: - LaxharLab/NoobAI-XL-dataset metrics: - FID - IS widget: - text: >- high quality, masterpiece, detailed, 8K, artist:nyantcha, evangeline_(nyantcha), vibrant surreal artwork, rainbow, light particles, from above, volumetric lighting, ((adult girl:1.2)), natural huge breasts, woman dressed as white rabbit, sleek pure white outfit, delicate white bunny ears, braid, plump, skindentation, huge breasts, falling into swirling black hole, seen from behind, glancing over shoulder, alluring mysterious expression, dress, zipper, zipper pull, detached sleeves, breasts apart (shoulder straps), buckles, long dress, swirling cosmic patterns, glowing particles, dramatic lighting, vibrant neon pink and blue tones, hyper-detailed, cinematic depth of field, smooth texture, film grain, chromatic aberration, high contrast, limited palette parameters: negative_prompt: >- lowres, worst quality, low quality, bad anatomy, bad hands, 4koma, comic, greyscale, censored, jpeg artifacts, overly saturated, overly vivid, (multiple views:1.1), (bad:1.05), fewer, extra, missing, worst quality, jpeg artifacts, bad quality, watermark, unfinished, displeasing, sepia, sketch, flat color, signature, artistic error, username, scan, (blurry, lowres, worst quality, (low quality:1.1), ugly, (bad anatomy:1.05), artist name, (patreon username:1.2) output: url: stand_on_ripplewater.jpeg --- # MIDNIGHT_NAI-XL_vPredV1 **Model Type:** Diffusion-based text-to-image generative model **Base Model:** SDXL 1.0 & Laxhar/noobai-XL-Vpred-1.0 **License:** [CreativeML Open RAIL++-M](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE) ## Model Description MIDNIGHT_NAI-XL_vPredV1 is a specialized fine-tuning of the NoobAI-XL (NAI-XL) model, designed to enhance anatomical precision, compositional coherence, and versatile style integration. This model excels in generating high-quality images with vibrant colors while minimizing overexposure. ## Usage Recommendations ### **Sampling Methods** MIDNIGHT_NAI-XL_vPred is optimized specifically for **Euler (normal)**. Use **ModelSamplingDiscrete** with **V-prediction** and **ZsNR set to true**. Other samplers may not provide stable results, and **V-prediction models do not support other samplers**. ### **CFG Scaling** **Dynamic CFG Plugin is bypassed as a backup for potential future needs.** Manually adjust **CFG scaling within a range of 3-4** for the best balance. For optimal results, a **preferred setting of 3.5** is recommended. ### **Custom Workflow** For an optimized generation process, use the [**MIDNIGHT1111_Chasm 2025-02-04**](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%202025-02-04.json) ComfyUI workflow. This workflow is specifically designed to **leverage the strengths of MIDNIGHT_NAI-XL_vPred**, providing a streamlined and efficient image generation pipeline. ## MIDNIGHT1111_Chasm For an optimized generation process, consider using the custom workflow [MIDNIGHT1111_Chasm 02-05-25](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%2002-05-25.json). This workflow is tailored to leverage the strengths of the MIDNIGHT_NAI-XL_vPredV1 model, providing a streamlined and efficient image generation pipeline. 
![MIDNIGHT1111_Chasm Workflow](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/resolve/main/MIDNIGHT1111_Chasm%20Workflow.png) *Note: The above image is a preview of the `MIDNIGHT1111_Chasm` workflow.* ### Method I: reForge without MIDNIGHT1111_Chasm Workflow 1. **Installation:** If not already installed, follow the instructions in the [reForge repository](https://github.com/Panchovix/stable-diffusion-webui-reForge) to set up. 2. **Usage:** Launch WebUI and use the model as usual. ### Method II: ComfyUI *with* MIDNIGHT1111_Chasm Workflow 1. **Installation:** Follow the setup instructions in the [ComfyUI repository](https://github.com/comfyanonymous/ComfyUI). 2. **Workflow Sample:** Utilize the provided [ComfyUI workflow sample](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/MIDNIGHT1111_Chasm%2002-05-25.json) for guidance. ### Method III: WebUI without MIDNIGHT1111_Chasm Workflow 1. **Installation:** Follow the instructions in the [WebUI repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to set up. 2. **Navigate to the WebUI Directory:** Before updating or switching branches, ensure you're inside the `stable-diffusion-webui` folder: ```bash cd stable-diffusion-webui ``` 3. **Switch to the Development Branch (Optional, for testing new features):** If you want to use the latest features from the development branch, run: ```bash git switch dev git pull ``` ⚠️ **Note:** The `dev` branch may contain bugs. If stability is your priority, it's best to stay on the `main` branch. 4. **Update WebUI (Main or Dev Branch):** To pull the latest updates while on either branch, run: ```bash git pull ``` πŸ”„ **Restart WebUI after updating to apply changes.** 5. **Configuration:** Ensure you're using a stable branch, as the dev branch may contain bugs. 
### Method IV: Diffusers without MIDNIGHT1111_Chasm Workflow ```python import torch from diffusers import StableDiffusionXLPipeline from diffusers import EulerDiscreteScheduler ckpt_path = "/path/to/model.safetensors" pipe = StableDiffusionXLPipeline.from_single_file( ckpt_path, use_safetensors=True, torch_dtype=torch.float16, ) scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True} pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args) pipe.enable_xformers_memory_efficient_attention() pipe = pipe.to("cuda") prompt = """masterpiece, best quality,artist:john_kafka,artist:nixeu,artist:quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)""" negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro" image = pipe( prompt=prompt, negative_prompt=negative_prompt, width=832, height=1216, num_inference_steps=28, guidance_scale=5, generator=torch.Generator().manual_seed(42), ).images[0] image.save("output.png") ``` ## e621/Danbooru Artist Wildcards for A1111 & ComfyUI Enclosed in CSV & TXT Formats To enhance the model's performance and specificity, the following trigger word lists in CSV format are included: - [`danbooru_artist_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_artist_webui.csv) - [`danbooru_character_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_character_webui.csv) - [`e621_artist_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_artist_webui.csv) - [`e621_character_webui.csv`](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_character_webui.csv) These lists provide recognized tags for various artists and characters, facilitating more accurate and tailored image generation. The wildcard file in 'TXT' format is included and designed for seamless integration with **AUTOMATIC1111** and **ComfyUI**, optimized for dynamic prompt generation using artist data from **e621** and **Danbooru**. - **TXT Format:** Sanitized artist tags by removing URLs and converted from `.csv` to `.txt` format for improved readability across different extensions. - **Dual Dataset Support:** Supports both e621 and Danbooru datasets to enhance art style diversity. - **Smooth Randomization:** Structured with trailing commas for seamless wildcard cycling during prompt generation. ## How to Use Wildcards ### For A1111 1. **Install:** [stable-diffusion-webui-wildcards](https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards) 2. **Place the `.txt` file in:** ``` /A1111/extensions/stable-diffusion-webui-wildcards ``` 3. **Use in your prompt like this:** ``` __e621_artist_wildcard__, very awa, masterpiece, best quality, amazing quality ``` ``` __danbooru_character_wildcard__, very awa, masterpiece, best quality, amazing quality ``` ``` __e621_artist_wildcard__, __danbooru_character_wildcard__, very awa, masterpiece, best quality, amazing quality ``` ### For ComfyUI 1. **Install:** [ComfyUI-Impact-Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack) 2. 
**Place the `.txt` file in:** ``` /ComfyUI/custom_nodes/ComfyUI-Impact-Pack/wildcards ``` or ``` /ComfyUI/custom_nodes/ComfyUI-Impact-Pack/custom_wildcards ``` 3. **Use the wildcard node to trigger dynamic randomization in your workflows.** ## What’s Included in Wildcards TXT formatted file containing clean, artist-focused wildcard files ready for dynamic prompt workflows in A1111 and ComfyUI. - [danbooru_artist_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_artist_wildcard.txt) - [danbooru_character_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/danbooru_character_wildcard.txt) - [e621_artist_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_artist_wildcard.txt) - [e621_character_wildcard.txt](https://huggingface.co/MidnightRunner/MIDNIGHT_NAI-XL_vPredV1/blob/main/e621_character_wildcard.txt) ## Acknowledgments Special thanks to: - **Development Team:** Laxhar Lab - **Coding Contributions:** Euge - **e621/Danbooru Wildcards** [ipsylon0000](https://civitai.com/user/ipsylon0000) - **Community Support:** Various contributors ## Additional Resources - **Guidebook for NoobAI XL:** [English Version](https://civitai.com/articles/8962) - **Recommended LoRa List for NoobAI XL:** [Resource Link](https://fcnk27d6mpa5.feishu.cn/wiki/IBVGwvVGViazLYkMgVEcvbklnge) - **Fixing Black Images in ComfyUI on macOS (M1/M2):** [Read the Article](https://civitai.com/articles/11106) - **Creative Solutions and Services:** [Magnabos.co](https://magnabos.co/) ## License This model is licensed under the [CreativeML Open RAIL++-M License](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE). By using this model, you agree to the terms and conditions outlined in the license.
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755616921
lisaozill03
2025-08-19T15:49:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:48:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jacoboss/MyGemmaNPC
jacoboss
2025-08-19T15:48:33Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-18T21:28:50Z
--- base_model: google/gemma-3-270m-it library_name: transformers model_name: MyGemmaNPC tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for MyGemmaNPC This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jacoboss/MyGemmaNPC", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.6.0+cu124 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
DeathGodlike/Rei-24B-KTO_EXL3
DeathGodlike
2025-08-19T15:46:54Z
0
0
safetensors
[ "safetensors", "KTO", "roleplaying", "prose", "mistral", "24B", "exl3", "4-bit", "6-bit", "8-bit", "text-generation", "base_model:Delta-Vector/Rei-24B-KTO", "base_model:quantized:Delta-Vector/Rei-24B-KTO", "license:apache-2.0", "region:us" ]
text-generation
2025-08-19T15:46:53Z
--- license: apache-2.0 base_model: - Delta-Vector/Rei-24B-KTO base_model_relation: quantized pipeline_tag: text-generation library_name: safetensors tags: - KTO - roleplaying - prose - mistral - 24B - exl3 - 4-bit - 6-bit - 8-bit --- ## EXL3 quants: [ [H8-4.0BPW](https://huggingface.co/DeathGodlike/Rei-24B-KTO_EXL3/tree/H8-4.0BPW) | [H8-6.0BPW](https://huggingface.co/DeathGodlike/Rei-24B-KTO_EXL3/tree/H8-6.0BPW) | [H8-8.0BPW](https://huggingface.co/DeathGodlike/Rei-24B-KTO_EXL3/tree/H8-8.0BPW) ] # Original model: [Rei-24B-KTO](https://huggingface.co/Delta-Vector/Rei-24B-KTO) by [Delta-Vector](https://huggingface.co/Delta-Vector)
aleebaster/blockassist-bc-sly_eager_boar_1755616783
aleebaster
2025-08-19T15:41:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:41:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
WenFengg/21_14l3_19__8
WenFengg
2025-08-19T15:37:51Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-19T14:56:20Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
lobsang41/lucky-planograms-gemma-3-4b
lobsang41
2025-08-19T15:37:49Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:google/gemma-3-4b-it", "base_model:finetune:google/gemma-3-4b-it", "endpoints_compatible", "region:us" ]
null
2025-08-19T14:46:20Z
--- base_model: google/gemma-3-4b-it library_name: transformers model_name: lucky-planograms-gemma-3-4b tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for lucky-planograms-gemma-3-4b This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="lobsang41/lucky-planograms-gemma-3-4b", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.8.0 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755616149
vwzyrraz7l
2025-08-19T15:36:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:36:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755617735
Elizavr
2025-08-19T15:36:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:36:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Agentic-1.0-GGUF
mradermacher
2025-08-19T15:34:19Z
0
1
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen3", "en", "base_model:beyoru/Agentic-1.0", "base_model:quantized:beyoru/Agentic-1.0", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-19T15:04:54Z
--- base_model: beyoru/Agentic-1.0 language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - qwen3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/beyoru/Agentic-1.0 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Agentic-1.0-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q2_K.gguf) | Q2_K | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q3_K_S.gguf) | Q3_K_S | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q3_K_L.gguf) | Q3_K_L | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.IQ4_XS.gguf) | IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q5_K_S.gguf) | Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q5_K_M.gguf) | Q5_K_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q6_K.gguf) | Q6_K | 3.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Agentic-1.0-GGUF/resolve/main/Agentic-1.0.f16.gguf) | f16 | 8.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
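To fetch a single quant from this repository programmatically, here is a hedged sketch with huggingface_hub; the Q4_K_M file name is taken from the table above.

```python
# Sketch: downloads one GGUF file to the local cache and prints its path;
# pass the returned path to your GGUF runtime of choice (e.g. llama.cpp).
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="mradermacher/Agentic-1.0-GGUF", filename="Agentic-1.0.Q4_K_M.gguf")
print(path)
```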
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755617387
lqpl
2025-08-19T15:31:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hairy insectivorous antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:30:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hairy insectivorous antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AppliedLucent/nemo-phase4
AppliedLucent
2025-08-19T15:31:28Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:AppliedLucent/nemo-phase3", "base_model:finetune:AppliedLucent/nemo-phase3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T15:18:37Z
--- base_model: AppliedLucent/nemo-phase3 tags: - text-generation-inference - transformers - unsloth - mistral license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** AppliedLucent - **License:** apache-2.0 - **Finetuned from model :** AppliedLucent/nemo-phase3 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
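A hedged loading sketch for this checkpoint, assuming the uploaded safetensors are full merged weights rather than an adapter:

```python
# Sketch: plain transformers loading; dtype and device choices are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("AppliedLucent/nemo-phase4")
model = AutoModelForCausalLM.from_pretrained("AppliedLucent/nemo-phase4", torch_dtype=torch.float16, device_map="auto")
```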
ucmp137538/best_RPT_coder_mathrl_ckpt-1000
ucmp137538
2025-08-19T15:22:35Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T15:19:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755615143
kojeklollipop
2025-08-19T15:21:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:21:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
TAUR-dev/M-voting_setup3_1epch_1e6_all_tasks_only_sft-sft
TAUR-dev
2025-08-19T15:20:09Z
0
0
null
[ "safetensors", "qwen2", "region:us" ]
null
2025-08-19T15:18:45Z
# M-voting_setup3_1epch_1e6_all_tasks_only_sft-sft This model was created as part of the **voting_setup3_1epch_1e6_all_tasks_only_sft** experiment using the SkillFactory experiment management system. ## Model Details - **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning) - **Stage Name**: sft - **Experiment**: voting_setup3_1epch_1e6_all_tasks_only_sft ## Training Configuration {"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_voting_setup3_1epch_1e6_all_tasks_only_sft_sft_data__sft_train", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/datastor1/mwadhwa/skill_inject_outputs/sf_experiments/skills_in_rl/voting_setup3_1epch_1e6_all_tasks_only_sft/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__voting_setup3_1epch_1e6_all_tasks_only_sft__v1", "sf_eval_before_training": false, "sf_wandb_project": "voting_setup3_1epch_1e6_all_tasks_only_sft_sft", "sf_eval_steps": null, "run_name": "voting_setup3_1epch_1e6_all_tasks_only_sft_sft"} ## Experiment Tracking 🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__voting_setup3_1epch_1e6_all_tasks_only_sft__v1) ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-voting_setup3_1epch_1e6_all_tasks_only_sft-sft") model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-voting_setup3_1epch_1e6_all_tasks_only_sft-sft") ```
Muapi/vintage-drawing-ce
Muapi
2025-08-19T15:18:13Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:18:02Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Vintage Drawing - CE ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: vntgdrwngCE style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:660535@811004", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Bahrom1996/whisper-uz
Bahrom1996
2025-08-19T15:16:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "uz", "dataset:common_voice_14_0", "base_model:jmshd/whisper-uz", "base_model:finetune:jmshd/whisper-uz", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-08-18T12:38:14Z
--- library_name: transformers language: - uz license: apache-2.0 base_model: jamshidahmadov/whisper-uz tags: - generated_from_trainer datasets: - common_voice_14_0 metrics: - wer model-index: - name: Whisper base uz - Bahrom results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_14_0 type: common_voice_14_0 config: uz split: test args: 'config: uz, split: test' metrics: - name: Wer type: wer value: 39.4953893762244 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper base uz - Bahrom This model is a fine-tuned version of [jamshidahmadov/whisper-uz](https://huggingface.co/jamshidahmadov/whisper-uz) on the common_voice_14_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4621 - Wer: 39.4954 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.5759 | 0.1323 | 500 | 0.4621 | 39.4954 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.5.0 - Datasets 3.3.2 - Tokenizers 0.21.0
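The card above stops at the framework versions and ships no inference snippet, so here is a minimal usage sketch; it assumes the checkpoint follows the standard Whisper layout that the `transformers` ASR pipeline expects, and `sample.wav` is a placeholder for any Uzbek audio recording.

```python
# Minimal sketch: transcribe an Uzbek audio clip with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Bahrom1996/whisper-uz",
)

# "sample.wav" is a hypothetical input file; replace it with your own recording.
print(asr("sample.wav")["text"])
```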
Muapi/360-panorama-sd1.5-flux
Muapi
2025-08-19T15:15:35Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:15:24Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # 360 panorama [SD1.5 / FLUX] ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: 360, panorama, spherical panorama ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:118398@756096", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755615004
lisaozill03
2025-08-19T15:15:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:15:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kodetr/stunting-7B-Qwen
kodetr
2025-08-19T15:15:29Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "stunting", "kesehatan", "anak", "conversational", "id", "dataset:kodetr/penelitian-fundamental-stunting-qa", "base_model:Qwen/Qwen1.5-7B-Chat", "base_model:finetune:Qwen/Qwen1.5-7B-Chat", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:59:41Z
--- library_name: transformers tags: - stunting - kesehatan - anak license: apache-2.0 datasets: - kodetr/penelitian-fundamental-stunting-qa language: - id metrics: - rouge - bleu pipeline_tag: text-generation base_model: - Qwen/Qwen1.5-7B-Chat --- ### Model Description <!-- Provide a longer summary of what this model is. --> Q&A consultation on stunting in children. - **Developed by:** Tanwir - **Language:** Indonesian ### Training ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d6d2f8b06abf924b24349d/ZmKG5B9AapbcvAzXdfkYZ.png) ### Use with transformers Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "kodetr/stunting-7B-Qwen" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "Jelaskan definisi 1000 hari pertama kehidupan."}, {"role": "user", "content": "Apa itu 1000 hari pertama kehidupan?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ```
phospho-app/Deimos252-ACT_BBOX-Light_dataset_deimos-yykfs
phospho-app
2025-08-19T15:14:42Z
0
0
phosphobot
[ "phosphobot", "act", "robotics", "dataset:Deimos252/Light_dataset_deimos", "region:us" ]
robotics
2025-08-19T15:14:06Z
--- datasets: Deimos252/Light_dataset_deimos library_name: phosphobot pipeline_tag: robotics model_name: act tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` 1 validation error for EpisodesFeatures Invalid JSON: EOF while parsing a value at line 2 column 0 [type=json_invalid, input_value='\n', input_type=str] For further information visit https://errors.pydantic.dev/2.11/v/json_invalid ``` ## Training parameters: - **Dataset**: [Deimos252/Light_dataset_deimos](https://huggingface.co/datasets/Deimos252/Light_dataset_deimos) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
yaryar78/DDPM-Ray-dog
yaryar78
2025-08-19T15:13:33Z
0
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2025-08-19T14:55:50Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) Fine-tuned from google/ddpm-cat-256 on the yaryar78/Ray_dog dataset. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('yaryar78/DDPM-Ray-dog') image = pipeline().images[0] image ```
Muapi/zavy-s-fluorescent-flux
Muapi
2025-08-19T15:11:56Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:11:43Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Zavy's Fluorescent - Flux ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: zavy-flrscnt ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:737408@824658", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
maheshkommuri/depthmap
maheshkommuri
2025-08-19T15:10:29Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-19T15:08:41Z
--- license: apache-2.0 ---
Muapi/alex-gross-style
Muapi
2025-08-19T15:09:51Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:09:38Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Alex Gross Style ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: Alex Gross Style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:96381@1407451", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755614551
sampingkaca72
2025-08-19T15:08:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:08:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Kurosawama/gemma-3-1b-it-Retranslation-align
Kurosawama
2025-08-19T15:07:32Z
0
0
transformers
[ "transformers", "safetensors", "trl", "dpo", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T15:07:28Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
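Since the card's "How to Get Started" section is still [More Information Needed], here is a hedged loading sketch; it assumes the repository holds full causal-LM weights loadable with plain `transformers` (as its `transformers`/`safetensors` tags suggest), which the card itself does not confirm.

```python
# Hedged sketch: load the DPO-aligned checkpoint as an ordinary causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kurosawama/gemma-3-1b-it-Retranslation-align"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
```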
Muapi/pascal-blanch
Muapi
2025-08-19T15:06:50Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:06:40Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Pascal Blanché ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: By Passcal Blanché ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1285926@1274884", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755614240
pempekmangedd
2025-08-19T15:06:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "patterned sturdy dolphin", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:06:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - patterned sturdy dolphin --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/josh-agle-shag-style
Muapi
2025-08-19T15:04:28Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T15:04:18Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Josh Agle (SHAG) Style ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: Josh Agle (SHAG) Style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:103382@1616823", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
rbelanec/train_svamp_1755615499
rbelanec
2025-08-19T15:03:29Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "prefix-tuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2025-08-19T14:58:45Z
--- library_name: peft license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - llama-factory - prefix-tuning - generated_from_trainer model-index: - name: train_svamp_1755615499 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_svamp_1755615499 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset. It achieves the following results on the evaluation set: - Loss: 0.1893 - Num Input Tokens Seen: 705184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 123 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:-----:|:----:|:---------------:|:-----------------:| | 0.7697 | 0.5 | 79 | 0.6681 | 35776 | | 0.5968 | 1.0 | 158 | 0.5173 | 70672 | | 0.1124 | 1.5 | 237 | 0.1794 | 105904 | | 0.132 | 2.0 | 316 | 0.1370 | 141328 | | 0.1259 | 2.5 | 395 | 0.1006 | 176752 | | 0.0482 | 3.0 | 474 | 0.0846 | 211808 | | 0.0378 | 3.5 | 553 | 0.1207 | 247104 | | 0.0761 | 4.0 | 632 | 0.0935 | 282048 | | 0.0108 | 4.5 | 711 | 0.1449 | 317248 | | 0.0208 | 5.0 | 790 | 0.1160 | 352592 | | 0.0152 | 5.5 | 869 | 0.1450 | 388176 | | 0.0132 | 6.0 | 948 | 0.1488 | 423184 | | 0.0151 | 6.5 | 1027 | 0.1474 | 458640 | | 0.0004 | 7.0 | 1106 | 0.1693 | 493440 | | 0.0006 | 7.5 | 1185 | 0.1817 | 528768 | | 0.0001 | 8.0 | 1264 | 0.1838 | 563872 | | 0.0 | 8.5 | 1343 | 0.1869 | 599232 | | 0.0002 | 9.0 | 1422 | 0.1876 | 634544 | | 0.0004 | 9.5 | 1501 | 0.1893 | 670064 | | 0.0001 | 10.0 | 1580 | 0.1893 | 705184 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.8.0+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
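The card above documents training but no inference code, so the following is an illustrative sketch only: it assumes the repository is a PEFT prefix-tuning adapter (per its tags) that attaches to the gated `meta-llama/Meta-Llama-3-8B-Instruct` base, and the word problem is a made-up example.

```python
# Illustrative sketch: attach the prefix-tuning adapter to its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = PeftModel.from_pretrained(base, "rbelanec/train_svamp_1755615499")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Hypothetical SVAMP-style question; the exact prompt format used during
# training is not stated on the card.
prompt = "Dan has 5 marbles and finds 3 more. How many marbles does he have now?"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```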
2hpsatt/blockassist-bc-huge_deft_eagle_1755615679
2hpsatt
2025-08-19T15:02:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T15:01:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - huge deft eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/randommaxx-mecharmor
Muapi
2025-08-19T15:01:21Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T14:59:10Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # RandomMaxx MechArmor ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: MechArmor, Robot, Mech, Heavy Mech, Power Armor, Cyborg ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:737782@1209804", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
wjbmattingly/lfm2-vl-450M-yiddish
wjbmattingly
2025-08-19T14:58:01Z
0
0
null
[ "safetensors", "lfm2-vl", "custom_code", "base_model:LiquidAI/LFM2-VL-450M", "base_model:finetune:LiquidAI/LFM2-VL-450M", "region:us" ]
null
2025-08-19T14:57:50Z
--- base_model: - LiquidAI/LFM2-VL-450M --- # model_step_13000 ## Model Description This model is a fine-tuned version of **LiquidAI/LFM2-VL-450M** using the brute-force-training package. - **Base Model**: LiquidAI/LFM2-VL-450M - **Training Status**: πŸ”„ In Progress - **Generated**: 2025-08-19 10:41:14 - **Training Steps**: 13,000 ## Training Details ### Dataset - **Dataset**: johnlockejrr/yiddish_synth_v2 - **Training Examples**: 100,000 - **Validation Examples**: 4,999 ### Training Configuration - **Max Steps**: 100,000 - **Batch Size**: 15 - **Learning Rate**: 7e-05 - **Gradient Accumulation**: 1 steps - **Evaluation Frequency**: Every 1,000 steps ### Current Performance - **Training Loss**: 0.124526 - **Evaluation Loss**: 0.189137 ## Pre-Training Evaluation **Initial Model Performance (before training):** - **Loss**: 2.626098 - **Perplexity**: 13.82 - **Character Accuracy**: 31.1% - **Word Accuracy**: 12.9% ## Evaluation History ### All Checkpoint Evaluations | Step | Checkpoint Type | Loss | Perplexity | Char Acc | Word Acc | Improvement vs Pre | |------|----------------|------|------------|----------|----------|--------------------| | Pre | pre_training | 2.6261 | 13.82 | 31.1% | 12.9% | +0.0% | | 1,000 | checkpoint | 0.9395 | 2.56 | 20.1% | 4.1% | +64.2% | | 2,000 | checkpoint | 0.8058 | 2.24 | 21.2% | 4.0% | +69.3% | | 3,000 | checkpoint | 0.7305 | 2.08 | 23.0% | 6.1% | +72.2% | | 4,000 | checkpoint | 0.6669 | 1.95 | 20.6% | 3.4% | +74.6% | | 5,000 | checkpoint | 0.5341 | 1.71 | 21.4% | 3.6% | +79.7% | | 6,000 | checkpoint | 0.4656 | 1.59 | 20.9% | 3.8% | +82.3% | | 7,000 | checkpoint | 0.3917 | 1.48 | 21.4% | 3.5% | +85.1% | | 8,000 | checkpoint | 0.3310 | 1.39 | 21.6% | 4.8% | +87.4% | | 9,000 | checkpoint | 0.2892 | 1.34 | 20.7% | 4.0% | +89.0% | | 10,000 | checkpoint | 0.2566 | 1.29 | 20.9% | 4.7% | +90.2% | | 11,000 | checkpoint | 0.2199 | 1.25 | 20.2% | 4.9% | +91.6% | | 12,000 | checkpoint | 0.2033 | 1.23 | 20.3% | 3.2% | +92.3% | | 13,000 | checkpoint | 0.1891 | 1.21 | 19.4% | 3.4% | +92.8% | ## Training Progress ### Recent Training Steps (Loss Only) | Step | Training Loss | Timestamp | |------|---------------|-----------| | 12,991 | 0.154684 | 2025-08-19T10:40 | | 12,992 | 0.183019 | 2025-08-19T10:40 | | 12,993 | 0.157314 | 2025-08-19T10:40 | | 12,994 | 0.168899 | 2025-08-19T10:40 | | 12,995 | 0.116096 | 2025-08-19T10:40 | | 12,996 | 0.122316 | 2025-08-19T10:40 | | 12,997 | 0.149480 | 2025-08-19T10:40 | | 12,998 | 0.166267 | 2025-08-19T10:40 | | 12,999 | 0.152927 | 2025-08-19T10:40 | | 13,000 | 0.124526 | 2025-08-19T10:40 | ## Training Visualizations ### Training Progress and Evaluation Metrics ![Training Curves](training_curves.png) *This chart shows the training loss progression, character accuracy, word accuracy, and perplexity over time. Red dots indicate evaluation checkpoints.* ### Evaluation Comparison Across All Checkpoints ![Evaluation Comparison](evaluation_comparison.png) *Comprehensive comparison of all evaluation metrics across training checkpoints. 
Red=Pre-training, Blue=Checkpoints, Green=Final.* ### Available Visualization Files: - **`training_curves.png`** - 4-panel view: Training loss with eval points, Character accuracy, Word accuracy, Perplexity - **`evaluation_comparison.png`** - 4-panel comparison: Loss, Character accuracy, Word accuracy, Perplexity across all checkpoints ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer # For vision-language models, use appropriate imports model = AutoModelForCausalLM.from_pretrained("./model_step_13000") tokenizer = AutoTokenizer.from_pretrained("./model_step_13000") # Your inference code here ``` ## Training Configuration ```json { "dataset_name": "johnlockejrr/yiddish_synth_v2", "model_name": "LiquidAI/LFM2-VL-450M", "max_steps": 100000, "eval_steps": 1000, "num_accumulation_steps": 1, "learning_rate": 7e-05, "train_batch_size": 15, "val_batch_size": 1, "train_select_start": 0, "train_select_end": 100000, "val_select_start": 100001, "val_select_end": 105000, "train_field": "train", "val_field": "train", "image_column": "image", "text_column": "text", "user_text": "Please transcribe all the Yiddish text you see in this historical manuscript image. Provide only the transcribed text without any additional commentary or description.", "max_image_size": 250 } ``` ## Model Card Metadata - **Base Model**: LiquidAI/LFM2-VL-450M - **Training Framework**: brute-force-training - **Training Type**: Fine-tuning - **License**: Inherited from base model - **Language**: Inherited from base model --- *This model card was automatically generated by brute-force-training on 2025-08-19 10:41:14*
matheoqtb/EuroBertV2final
matheoqtb
2025-08-19T14:56:59Z
0
0
null
[ "safetensors", "eurobert", "custom_code", "region:us" ]
null
2025-08-19T14:56:50Z
# Exported checkpoint: final This repository contains a checkpoint extracted from `matheoqtb/euroBertV2_test2` (subfolder `final`) together with the required code files taken from `EuroBERT/EuroBERT-610m`. Loading: from transformers import AutoTokenizer, AutoModel tok = AutoTokenizer.from_pretrained('<THIS_REPO>', trust_remote_code=True) mdl = AutoModel.from_pretrained('<THIS_REPO>', trust_remote_code=True) Task: feature-extraction (embeddings)
Muapi/tifa-lockhart-ffviir
Muapi
2025-08-19T14:56:12Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T14:55:53Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Tifa Lockhart (FFVIIR) ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: TifaLockhart, croptop, skirt, suspenders, fingerless gloves ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:661363@740105", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
KMH158/t5-small-openassistant-chat
KMH158
2025-08-19T14:54:39Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-08-19T12:36:35Z
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer model-index: - name: t5-small-openassistant-chat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-openassistant-chat This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1785 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 80 - eval_batch_size: 1 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.3768 | 1.0 | 301 | 2.3842 | | 2.6839 | 2.0 | 602 | 2.3277 | | 2.6351 | 3.0 | 903 | 2.2995 | | 2.6016 | 4.0 | 1204 | 2.2818 | | 2.5803 | 5.0 | 1505 | 2.2680 | | 2.5587 | 6.0 | 1806 | 2.2571 | | 2.541 | 7.0 | 2107 | 2.2481 | | 2.5323 | 8.0 | 2408 | 2.2409 | | 2.5102 | 9.0 | 2709 | 2.2349 | | 2.5063 | 10.0 | 3010 | 2.2288 | | 2.4953 | 11.0 | 3311 | 2.2242 | | 2.4926 | 12.0 | 3612 | 2.2192 | | 2.4786 | 13.0 | 3913 | 2.2154 | | 2.472 | 14.0 | 4214 | 2.2117 | | 2.4662 | 15.0 | 4515 | 2.2079 | | 2.4553 | 16.0 | 4816 | 2.2051 | | 2.4472 | 17.0 | 5117 | 2.2020 | | 2.4488 | 18.0 | 5418 | 2.2008 | | 2.4367 | 19.0 | 5719 | 2.1972 | | 2.4353 | 20.0 | 6020 | 2.1952 | | 2.429 | 21.0 | 6321 | 2.1934 | | 2.4247 | 22.0 | 6622 | 2.1912 | | 2.4242 | 23.0 | 6923 | 2.1901 | | 2.4196 | 24.0 | 7224 | 2.1887 | | 2.4169 | 25.0 | 7525 | 2.1873 | | 2.4122 | 26.0 | 7826 | 2.1862 | | 2.4089 | 27.0 | 8127 | 2.1851 | | 2.4042 | 28.0 | 8428 | 2.1841 | | 2.4061 | 29.0 | 8729 | 2.1831 | | 2.4007 | 30.0 | 9030 | 2.1823 | | 2.397 | 31.0 | 9331 | 2.1814 | | 2.3998 | 32.0 | 9632 | 2.1810 | | 2.3963 | 33.0 | 9933 | 2.1805 | | 2.3976 | 34.0 | 10234 | 2.1798 | | 2.3919 | 35.0 | 10535 | 2.1794 | | 2.3873 | 36.0 | 10836 | 2.1793 | | 2.3899 | 37.0 | 11137 | 2.1789 | | 2.3886 | 38.0 | 11438 | 2.1786 | | 2.3906 | 39.0 | 11739 | 2.1786 | | 2.393 | 40.0 | 12040 | 2.1785 | ### Framework versions - Transformers 4.55.2 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
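No usage snippet accompanies the training log above, so here is a minimal sketch; it assumes the checkpoint keeps the standard T5 seq2seq architecture, and the plain-question prompt is a guess since the card does not document a prompt format.

```python
# Minimal sketch: query the chat-tuned T5 via the text2text pipeline.
from transformers import pipeline

chat = pipeline("text2text-generation", model="KMH158/t5-small-openassistant-chat")
print(chat("What is the capital of France?", max_new_tokens=64)[0]["generated_text"])
```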
prl90777/R1_Qwen3_8B_0719
prl90777
2025-08-19T14:48:53Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "lora", "transformers", "base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "license:mit", "region:us" ]
null
2025-08-19T11:31:10Z
--- library_name: peft license: mit base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B tags: - base_model:adapter:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B - lora - transformers model-index: - name: R1_Qwen3_8B_0719 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # R1_Qwen3_8B_0719 This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4267 - Map@3: 0.9177 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Map@3 | |:-------------:|:------:|:----:|:---------------:|:------:| | 26.6085 | 0.0523 | 20 | 1.3286 | 0.7507 | | 9.7222 | 0.1046 | 40 | 1.0625 | 0.7933 | | 7.9943 | 0.1569 | 60 | 0.8487 | 0.8183 | | 7.4982 | 0.2092 | 80 | 0.8259 | 0.8315 | | 6.7844 | 0.2615 | 100 | 0.7845 | 0.8407 | | 6.1752 | 0.3138 | 120 | 0.7051 | 0.8571 | | 5.3012 | 0.3661 | 140 | 0.6606 | 0.8683 | | 4.7654 | 0.4184 | 160 | 0.5941 | 0.8830 | | 5.3467 | 0.4707 | 180 | 0.6074 | 0.8771 | | 4.4068 | 0.5230 | 200 | 0.5947 | 0.8880 | | 4.9025 | 0.5754 | 220 | 0.5081 | 0.8986 | | 4.3179 | 0.6277 | 240 | 0.5520 | 0.8941 | | 4.4065 | 0.6800 | 260 | 0.4970 | 0.9040 | | 3.7451 | 0.7323 | 280 | 0.4987 | 0.9045 | | 4.4839 | 0.7846 | 300 | 0.4905 | 0.9085 | | 3.5164 | 0.8369 | 320 | 0.4644 | 0.9067 | | 3.9504 | 0.8892 | 340 | 0.4650 | 0.9066 | | 3.6298 | 0.9415 | 360 | 0.4461 | 0.9106 | | 3.6195 | 0.9938 | 380 | 0.4242 | 0.9173 | | 3.0214 | 1.0445 | 400 | 0.5402 | 0.9058 | | 2.7135 | 1.0968 | 420 | 0.4302 | 0.9203 | | 2.6106 | 1.1491 | 440 | 0.4071 | 0.9252 | | 2.8122 | 1.2014 | 460 | 0.4366 | 0.9188 | | 3.0033 | 1.2537 | 480 | 0.4178 | 0.9230 | | 2.59 | 1.3060 | 500 | 0.4116 | 0.9233 | | 3.0395 | 1.3583 | 520 | 0.4267 | 0.9177 | ### Framework versions - PEFT 0.17.0 - Transformers 4.55.2 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
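The card reports metrics but no loading code; the sketch below assumes this repo is a LoRA adapter (per its `peft`/`lora` tags) for `deepseek-ai/DeepSeek-R1-0528-Qwen3-8B`, and that enough memory is available for the 8B base.

```python
# Hedged sketch: load the base model, then apply the LoRA adapter on top.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
    torch_dtype="auto",
    device_map="auto",  # requires accelerate; drop this on CPU-only machines
)
model = PeftModel.from_pretrained(base, "prl90777/R1_Qwen3_8B_0719")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-0528-Qwen3-8B")
```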
lilTAT/blockassist-bc-gentle_rugged_hare_1755614706
lilTAT
2025-08-19T14:45:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:45:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Team-Atom/act_record_pp_blue001_96_100000
Team-Atom
2025-08-19T14:41:55Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:Team-Atom/PiPl_blue_001", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-08-19T14:41:42Z
--- datasets: Team-Atom/PiPl_blue_001 library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - robotics - lerobot - act --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
lilTAT/blockassist-bc-gentle_rugged_hare_1755614412
lilTAT
2025-08-19T14:40:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:40:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755612596
sampingkaca72
2025-08-19T14:36:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:36:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aleebaster/blockassist-bc-sly_eager_boar_1755612564
aleebaster
2025-08-19T14:34:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:34:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sarrockia/prefectIllustriousXL_v3.safetensors
sarrockia
2025-08-19T14:33:06Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-19T14:04:58Z
--- license: apache-2.0 ---
Andra76/blockassist-bc-deadly_enormous_butterfly_1755613857
Andra76
2025-08-19T14:31:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly enormous butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:31:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly enormous butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
EnLiving-AI/CosmosC1
EnLiving-AI
2025-08-19T14:26:23Z
0
4
null
[ "Cosmos", "Learning", "Advacned_learning", "NO-API", "text-generation", "license:mit", "region:us" ]
text-generation
2025-08-19T13:15:16Z
--- license: mit pipeline_tag: text-generation tags: - Cosmos - Learning - Advacned_learning - NO-API --- # 🌌 Cosmos C1 **Autonomous Knowledge Explorer — v1.0** ![cosmosc1.png](https://cdn-uploads.huggingface.co/production/uploads/6884f0736963bab90a76beea/cpYTrRilKLz_vh85ABiUh.png) Cosmos C1 is an **autonomous research engine** packed into a simple `.exe` app. It explores the web, extracts knowledge, and builds structured insights — all without needing APIs or Python setup. Just run the `.exe` and watch your AI explore, learn, and grow its own knowledge base. --- ## ✨ Features - 🔍 **Autonomous Research Cycles** — Runs continuous query → learn → extract → store loops. - 🧠 **Knowledge Extraction** — Identifies concepts, relationships, and facts from raw text. - 📊 **Knowledge Base Growth** — Expands memory with each cycle. - 🌐 **No API Required** — Directly learns from the web. - 🖥️ **Standalone .exe** — No Python, no installs, just double-click and go. - 📜 **Summaries** — Generates cycle logs and session summaries. --- ## ⚡ Quick Start 1. **Download** the latest release from [Releases](https://huggingface.co/EnLiving-AI/CosmosC1/resolve/main/CosmosC1.exe). 2. Place `CosmosC1.exe` in your desired folder. 3. Double-click to launch. 4. The terminal window will start showing research cycles in real time. 5. Press `Ctrl+C` anytime to stop and see a final **Session Summary**. --- ## 🖼️ Example Run <code> 🚀 Autonomous Knowledge Explorer </code><br> <code>🌐 No APIs - Direct Learning from Web</code><br> <code>Press Ctrl+C to stop and show summary</code><br> <code>🌀 CYCLE 1</code><br> <code>🔍 Source: Web</code><br> <code>📚 Query: Applications of Shakespeare</code><br> <code>📖 Content Learned:</code><br> <code>... raw snippets ...</code><br> <code>💡 Extracted Knowledge:</code><br> <code>✦ Concepts: Applications Directory, Windows</code><br> <code>✦ Relationships: Applications Directory ↔ Windows</code><br> <code>📊 Knowledge Base: 2 concepts | 1 discovery</code><br> --- At the end, Cosmos C1 shows: - ✅ **Total Cycles** - ✅ **Concepts Learned** - ✅ **Discoveries Recorded** - ✅ **Top Discoveries** - ✅ **Current Focus Area** --- ## 🎯 Use Cases - AI-driven **research assistant** - Automated **concept discovery** - Inspiration for **autonomous agent design** - Demonstration of **web knowledge extraction** --- ## 🚧 Current Limitations - Requires internet access - Works in a terminal window (no GUI yet) - May capture unrelated snippets (still improving filtering) --- ## 📌 Roadmap - [ ] GUI Dashboard - [ ] Exportable Knowledge Graphs - [ ] Smarter Query Refinement - [ ] Multi-agent collaboration --- ## 📄 License MIT License — feel free to use, modify, and contribute. ---
zhuojing-huang/gpt2-arabic-english-ewc
zhuojing-huang
2025-08-19T14:25:57Z
22
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-18T13:49:13Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: gpt2-arabic-english-ewc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-arabic-english-ewc This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 30 - training_steps: 122070 ### Training results ### Framework versions - Transformers 4.53.1 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.2
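The card above lists hyperparameters only, so here is a minimal generation sketch; it assumes standard GPT-2 weights and tokenizer in the repo, and the English prompt is an arbitrary choice (the repo name suggests Arabic and English training data).

```python
# Minimal sketch: sample a continuation from the bilingual GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="zhuojing-huang/gpt2-arabic-english-ewc")
print(generator("The weather today is", max_new_tokens=30)[0]["generated_text"])
```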
lilTAT/blockassist-bc-gentle_rugged_hare_1755613409
lilTAT
2025-08-19T14:23:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:23:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755611635
quantumxnode
2025-08-19T14:22:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:22:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fatmhd1995/phi35_ft_llm_4_annotation_rnd1_v2
fatmhd1995
2025-08-19T14:19:49Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:16:28Z
--- base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** fatmhd1995 - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
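The card credits Unsloth but gives no inference code; the sketch below assumes the repo contains merged weights loadable with plain `transformers` rather than a LoRA adapter, which the card does not state explicitly.

```python
# Hedged sketch: load the fine-tuned Phi-3.5-mini checkpoint for inference.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fatmhd1995/phi35_ft_llm_4_annotation_rnd1_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
```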
Joetib/en-twi-qwen2.5-0.5B-Instruct
Joetib
2025-08-19T14:19:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:19:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
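The "How to Get Started" section above is empty, so here is a hedged sketch; it assumes Qwen2-style chat weights (per the repo name and tags) and uses a made-up English-to-Twi prompt, since the card documents no prompt format.

```python
# Hedged sketch: a chat-style translation request to the EN-Twi checkpoint.
from transformers import pipeline

pipe = pipeline("text-generation", model="Joetib/en-twi-qwen2.5-0.5B-Instruct")
messages = [{"role": "user", "content": "Translate to Twi: Good morning, how are you?"}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"])
```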
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755611487
hakimjustbao
2025-08-19T14:19:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:19:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Kazuki1450/Qwen2.5-1.5B_lightr1_3_EN_1024_1p0_0p0_1p0_sft
Kazuki1450
2025-08-19T14:18:55Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-1.5B", "base_model:finetune:Qwen/Qwen2.5-1.5B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:16:21Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-1.5B tags: - generated_from_trainer model-index: - name: Qwen2.5-1.5B_lightr1_3_EN_1024_1p0_0p0_1p0_sft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen2.5-1.5B_lightr1_3_EN_1024_1p0_0p0_1p0_sft This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAFACTOR and the args are: No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.55.2 - Pytorch 2.7.1+cu128 - Datasets 4.0.0 - Tokenizers 0.21.2
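The card stops at framework versions; the sketch below assumes the SFT checkpoint ships standard Qwen2-style weights and a chat template, with an arbitrary math prompt (the `lightr1` name suggests reasoning data, but the card does not say).

```python
# Hedged sketch: one chat turn with the fine-tuned Qwen2.5-1.5B checkpoint.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Kazuki1450/Qwen2.5-1.5B_lightr1_3_EN_1024_1p0_0p0_1p0_sft",
)
messages = [{"role": "user", "content": "What is 12 * 9?"}]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"])
```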
mradermacher/UI-Venus-Navi-72B-GGUF
mradermacher
2025-08-19T14:16:46Z
0
1
transformers
[ "transformers", "gguf", "en", "base_model:inclusionAI/UI-Venus-Navi-72B", "base_model:quantized:inclusionAI/UI-Venus-Navi-72B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-18T23:56:29Z
--- base_model: inclusionAI/UI-Venus-Navi-72B language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/inclusionAI/UI-Venus-Navi-72B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#UI-Venus-Navi-72B-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/UI-Venus-Navi-72B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.9 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.mmproj-f16.gguf) | mmproj-f16 | 1.5 | multi-modal supplement | | [GGUF](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q2_K.gguf) | Q2_K | 29.9 | | | [GGUF](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q3_K_S.gguf) | Q3_K_S | 34.6 | | | [GGUF](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q3_K_L.gguf) | Q3_K_L | 39.6 | | | [GGUF](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.IQ4_XS.gguf) | IQ4_XS | 40.3 | | | [GGUF](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | | | [PART 1](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | | | [PART 1](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality | | [PART 1](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q8_0.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/UI-Venus-Navi-72B-GGUF/resolve/main/UI-Venus-Navi-72B.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
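As noted above, the larger quants ship in parts and must be concatenated before use. Below is a minimal sketch in Python (a byte-for-byte join, equivalent to `cat part1 part2 > whole`), using the Q5_K_S file names from the table; the resulting file can then be loaded by any GGUF runtime such as llama.cpp:

```python
import shutil

# Rejoin the split Q5_K_S quant into a single GGUF file.
parts = [
    "UI-Venus-Navi-72B.Q5_K_S.gguf.part1of2",
    "UI-Venus-Navi-72B.Q5_K_S.gguf.part2of2",
]
with open("UI-Venus-Navi-72B.Q5_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # Stream in chunks so ~50 GB of data never has to fit in RAM.
            shutil.copyfileobj(src, out)
```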
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755611423
lisaozill03
2025-08-19T14:15:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:15:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yaelahnal/blockassist-bc-mute_clawed_crab_1755612755
yaelahnal
2025-08-19T14:13:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:13:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute clawed crab --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vrbhalaaji/my_policy
vrbhalaaji
2025-08-19T14:13:45Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:vrbhalaaji/orange-pick-test", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-08-19T14:13:00Z
--- datasets: vrbhalaaji/orange-pick-test library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - act - lerobot - robotics --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
lilTAT/blockassist-bc-gentle_rugged_hare_1755612776
lilTAT
2025-08-19T14:13:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:13:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle rugged hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
abishekcodes/distil_nre_pii
abishekcodes
2025-08-19T14:11:33Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-19T14:06:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
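Since the quick-start section above is empty, the following is a minimal sketch, assuming the checkpoint works with the standard token-classification pipeline (the repo id comes from this record; the PII label set is not documented, so the output is inspected generically):

```python
from transformers import pipeline

# Entity labels come from the model's config and are not documented in the card.
nlp = pipeline(
    "token-classification",
    model="abishekcodes/distil_nre_pii",
    aggregation_strategy="simple",
)

text = "John Doe lives at 221B Baker Street and his email is john@example.com."
for entity in nlp(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```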
aldsouza/health-agent
aldsouza
2025-08-19T14:08:07Z
104
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "medical", "function-calling", "llm", "healthcare", "conversational-ai", "conversational", "en", "dataset:Salesforce/xlam-function-calling-60k", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-12T23:46:13Z
--- library_name: transformers tags: - medical - function-calling - llm - healthcare - conversational-ai license: mit datasets: - Salesforce/xlam-function-calling-60k language: - en base_model: - deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B pipeline_tag: text-generation --- # Medical Function-Calling LLM (Fine-tuned DeepSeek-R1-Distill-Qwen-1.5B) This model is a fine-tuned version of **[deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)**, specialized for **medical domain function-calling** tasks. It is trained on **[Salesforce/xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k)** to reliably produce structured JSON outputs for healthcare applications such as appointment booking, medical record retrieval, patient communication, and medical triage support. --- ## Model Details - **Developed by:** Alton Lavin D’Souza - **Funded by:** Self-funded - **Model type:** Instruction-tuned causal language model with function-calling capabilities - **Language(s):** English - **License:** MIT - **Finetuned from model:** [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) ### Model Sources - **Repository:** [GitHub – Medical Function Calling LLM](https://github.com/) - **Base model card:** [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) --- ## Uses ### Direct Use - Conversational AI assistants in healthcare - Automated structured response generation in JSON - Integration with electronic health record (EHR) systems - Medical workflow automation (e.g., booking appointments, retrieving patient data) ### Downstream Use - Fine-tuning for specific healthcare specialties - Integration into clinical decision support systems - Agent-based medical AI systems with tool use ### Out-of-Scope Use - Direct diagnosis without human oversight - Emergency medical response without clinician involvement - General-purpose non-medical applications (may work but not optimized) --- ## Bias, Risks, and Limitations This model may: - Hallucinate medical facts if prompted outside its training scope - Produce incomplete JSON structures if instructions are ambiguous - Require strict validation before integration into real-world healthcare systems **⚠️ Important:** This model is **not** a substitute for a licensed medical professional. 
--- ## Tool Calling Example: ```python import json import os import pickle from datetime import datetime, timedelta, time as time_1 from threading import Thread from typing import TypedDict, Dict, List, Any import pytz import torch from google.auth.transport.requests import Request from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build from langchain_community.tools import TavilySearchResults from langgraph.constants import START, END from langgraph.graph import StateGraph import regex from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer from dotenv import load_dotenv from tzlocal import get_localzone load_dotenv() torch.manual_seed(11) model_name = "aldsouza/health-agent" pattern = r''' \{ # Opening brace of the function block \s*"name"\s*:\s*"([^"]+)"\s*, # Capture the function name \s*"arguments"\s*:\s*(\{ # Capture the arguments JSON object starting brace (?:[^{}]++ | (?2))*? # Recursive matching for balanced braces (PCRE syntax) \}) # Closing brace of arguments \s*\} # Closing brace of the function block ''' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to("cuda") # model_1 = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",torch_dtype=torch.float16).to("cuda") medical_tools = [ { "name": "symptom_checker", "description": "Analyze symptoms and provide possible conditions.", "parameters": { "symptoms": { "description": "List of symptoms reported by the patient.", "type": "list[str]", "default": ["headache", "fever"] } } }, { "name": "medication_lookup", "description": "Look up details about a medication by its name.", "parameters": { "medication_name": { "description": "Name of the medication to look up.", "type": "str", "default": "Aspirin" } } }, { "name": "book_appointment", "description": "Schedule a medical appointment with a doctor.", "parameters": { "patient_name": { "description": "Name of the patient.", "type": "str", "default": "John Doe" }, "doctor_specialty": { "description": "Specialty of the doctor to book.", "type": "str", "default": "general practitioner" }, "date": { "description": "Preferred date of appointment (YYYY-MM-DD).", "type": "str", "default": "2025-08-20" } } }, { "name": "get_lab_results", "description": "Retrieve lab test results for a patient by test ID.", "parameters": { "patient_id": { "description": "Unique patient identifier.", "type": "str", "default": "123456" }, "test_id": { "description": "Lab test identifier.", "type": "str", "default": "cbc" } } }, { "name": "request_missing_info", "description": "Ask the user for missing or incomplete information needed to fulfill their request.", "parameters": { "missing_fields": { "description": "List of missing required fields to be clarified by the user.", "type": "list[str]", "default": [] }, "context": { "description": "Optional context or explanation to help the user provide the missing information.", "type": "str", "default": "" } } }, { "name": "medical_device_info", "description": "Retrieve detailed information about a medical device by its name or model number.", "parameters": { "device_name": { "description": "The name or model number of the medical device to look up.", "type": "str", "default": "Blood Pressure Monitor" } } }, { "name":
"record_blood_pressure", "description": "Record a patient's blood pressure reading with systolic, diastolic, and pulse rate values.", "parameters": { "patient_id": { "description": "Unique identifier of the patient.", "type": "str", "default": "123456" }, "systolic": { "description": "Systolic blood pressure value (mmHg).", "type": "int", "default": 120 }, "diastolic": { "description": "Diastolic blood pressure value (mmHg).", "type": "int", "default": 80 }, "pulse_rate": { "description": "Pulse rate in beats per minute.", "type": "int", "default": 70 }, "measurement_time": { "description": "Timestamp of the measurement (YYYY-MM-DD HH:MM).", "type": "str", "default": "2025-08-12 09:00" } } }, { "name": "start_blood_pressure_test", "description": "Initiate a blood pressure measurement test for a patient using a connected device.", "parameters": { "patient_id": { "description": "Unique identifier of the patient.", "type": "str", "default": "123456" }, "device_id": { "description": "Identifier or model of the blood pressure measuring device.", "type": "str", "default": "BP-Device-001" } } } ] # Compose the system prompt embedding the tools JSON system_prompt = f""" You are an intelligent AI assistant that uses available tools (functions) to help users achieve their medical-related goals. Your job is to understand the user's intent, identify missing information if needed, and then select and call the most appropriate function(s) to solve the task. # Rules: - ALWAYS use the tools provided to answer the user's request, unless explicitly told not to. - Ask clarifying questions ONLY if the user's request is ambiguous or lacks required input parameters. - If multiple tools are needed, use them in sequence. - DO NOT make up data or assume values β€” request any missing input clearly. # Output Format: - Respond using a JSON list of function calls in the following format: [ {{ "name": "function_name", "arguments": {{ "param1": "value1", "param2": "value2" }} ] - Only include the functions needed to complete the task. - If no function is needed or the input is unclear, ask a clarifying question instead of guessing. - Do NOT respond with explanations or natural language outside the JSON block unless explicitly instructed. Following are the tools provided to you: {json.dumps(medical_tools, indent=2)} """ SCOPES = ['https://www.googleapis.com/auth/calendar'] def symptom_checker(kwargs): print(f"Checking diseases for following symptoms on the web:") symptoms = kwargs.get("symptoms",[]) print(symptoms) for i, arg in enumerate(symptoms): print(f"{i}. {arg}") results = TavilySearchResults() information = "" for result in results.invoke(f"What causes {''.join(symptoms)}"): information = information + result["content"] + "\n" return { "status":200, "message":information } def medication_lookup(kwargs): medication_name = kwargs.get("medication_name") print(f"Looking up the web for information on {medication_name}....") results = TavilySearchResults() information = "" for result in results.invoke(f"What is {medication_name}?"): information = information + result["content"] + "\n" return { "status": 200, "message": information } def create_google_calendar_meeting( summary: str, start_datetime: str, end_datetime: str, attendees_emails: list, timezone: str = 'America/Chicago' ): """ Creates a Google Calendar event. Args: summary (str): Event title. start_datetime (str): Start datetime in ISO format, e.g., "2025-08-18T10:00:00-06:00". end_datetime (str): End datetime in ISO format. 
attendees_emails (list): List of attendee emails. timezone (str): Timezone string, default 'America/Chicago'. """ creds = None # Load saved credentials if available if os.path.exists('token.pickle'): with open('token.pickle', 'rb') as token: creds = pickle.load(token) # Authenticate if necessary if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES) creds = flow.run_local_server(port=0) with open('token.pickle', 'wb') as token: pickle.dump(creds, token) service = build('calendar', 'v3', credentials=creds) event = { 'summary': summary, 'location': 'Virtual / Google Meet', 'description': f'{summary} meeting.', 'start': {'dateTime': start_datetime, 'timeZone': timezone}, 'end': {'dateTime': end_datetime, 'timeZone': timezone}, 'attendees': [{'email': email} for email in attendees_emails], 'reminders': {'useDefault': True}, } created_event = service.events().insert( calendarId='primary', body=event, sendUpdates='all' ).execute() print(f"Event created: {created_event.get('htmlLink')}") return created_event def book_appointment(kwargs): patient_name = kwargs.get("patient_name") doctor_specialty = kwargs.get("doctor_specialty") date_str = kwargs.get("date") parsed_date = datetime.strptime(date_str, "%Y-%m-%d").date() # Default time 9:00 AM Mountain Time mountain_tz = pytz.timezone("America/Denver") dt_mt = datetime.combine(parsed_date, time_1(9, 0)) dt_mt = mountain_tz.localize(dt_mt) # Autodetect local timezone local_tz = get_localzone() dt_local = dt_mt.astimezone(local_tz) dt_local_end = dt_local + timedelta(hours=1) result = create_google_calendar_meeting( f"Meeting for {patient_name}", dt_local.isoformat(), dt_local_end.isoformat(), ["altondsouza02@gmail.com", "aldsouza@ualberta.ca"] ) return { "status":200, "message": f"Event Created:{result}" } function_execution_map = { "symptom_checker": symptom_checker, "medication_lookup": medication_lookup, "book_appointment": book_appointment } # Example prompt using the medical tools # messages = [ # { # "content": system_prompt, # "role": "system" # }, # { # "content": ( # "I have a headache and mild fever. What could be the possible conditions? " # "Also, lookup medication details for 'Ibuprofen'. " # "Please book an appointment for patient 'Alice Smith' with a neurologist on 2025-09-01." 
# ), # "role": "user" # } # ] # streamer = TextStreamer(tokenizer, skip_prompt=True) # streamer = TextIteratorStreamer(tokenizer, skip_prompt=True) # inputs = tokenizer.apply_chat_template( # messages, # add_generation_prompt=True, # tokenize=True, # return_dict=True, # return_tensors="pt", # ).to(model.device) # inputs = tokenizer.apply_chat_template( # messages, # add_generation_prompt=True, # tokenize=True, # return_dict=True, # return_tensors="pt", # ).to(mo) # generation_kwargs = dict(inputs,streamer=streamer, # max_new_tokens=4096, # temperature=0.7,) # thread = Thread(target=model.generate, kwargs=generation_kwargs,daemon=True) # thread.start() # for new_text in streamer: # print(new_text, end="") # with torch.no_grad(): # outputs = model.generate( # **inputs,streamer=streamer, # max_new_tokens=4096, # temperature=0.7, # ) class State(TypedDict): messages: List[Dict[str, Any]] plan: List[Dict[str, Any]] task: str graph_builder = StateGraph(State) PLANNING_AGENT = "PLANNING_AGENT" def planning(state: State): print("Coming up with Plan") messages = state.get("messages", []) inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", ).to(model.device) streamer = TextIteratorStreamer(tokenizer, skip_prompt=True) generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=4096, temperature=0.7, ) thread = Thread(target=model.generate, kwargs=generation_kwargs, daemon=True) thread.start() generated_text = "" for new_text in streamer: print(new_text, end="") generated_text = generated_text + new_text generated_text = generated_text.replace("<|end▁of▁sentence|>","").replace("</think>","") matches = regex.findall(pattern, generated_text, regex.VERBOSE) plan = state.get("plan", []) for i, (func_name, args_json) in enumerate(matches, 1): plan_entry = dict() plan_entry["function_name"] = func_name plan_entry["arguments"] = json.loads(args_json) plan.append(plan_entry) messages.append({"role": "assistant", "content": generated_text}) return {"messages":messages, "plan": plan} ROUTER = "ROUTER" def router(state: State): plan = state.get("plan", []) if len(plan) > 0: return "execute_plan" return "respond" def execute_plan(state: State): print("Executing") plan = state.get("plan", []) for plan_entry in plan: plan_entry["status"] = dict() print(f"Executing {plan_entry['function_name']} with details {plan_entry['arguments']}") print("Approve Execution?(y/n)") response = input() response = response.strip().lower() if response == "y": print("Approved.") if plan_entry["function_name"] in function_execution_map.keys(): function = function_execution_map[plan_entry["function_name"]] result = function(plan_entry["arguments"]) plan_entry["status"] = result else: print(f"Capability not implemented for {plan_entry['function_name']}") print("Done with task.") print("Proceeding with next.") elif response == "n": print("Not approved.") else: print("Invalid input, please enter 'y' or 'n'.") return {"plan": plan} def respond(state: State): print(state.get("messages")[-1]["content"]) return {"plan": state.get("plan")} def summarize(state: State): plan = state.get("plan") messages = state.get("messages") summary_prompt = [] summary_prompt.append({ "role": "user","content": f"Summarize the results obtained from the following tool executions:\n {json.dumps(plan,indent=2)}" }) inputs = tokenizer.apply_chat_template( summary_prompt, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", ).to(model.device) streamer = 
TextIteratorStreamer(tokenizer, skip_prompt=True) generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=4096, temperature=0.7, ) thread = Thread(target=model.generate, kwargs=generation_kwargs, daemon=True) thread.start() generated_text = "" for new_text in streamer: print(new_text, end="") generated_text = generated_text + new_text messages.append({"role": "assistant", "content": generated_text}) return {"messages":messages} EXECUTE_PLAN = "EXECUTE_PLAN" RESPOND = "RESPOND" SUMMARIZE = "SUMMARIZE" graph_builder.add_node(PLANNING_AGENT, planning) graph_builder.add_node(EXECUTE_PLAN, execute_plan) graph_builder.add_node(RESPOND, respond) graph_builder.add_node(SUMMARIZE, summarize) graph_builder.add_edge(START, PLANNING_AGENT) graph_builder.add_conditional_edges(PLANNING_AGENT, router, { "execute_plan": EXECUTE_PLAN, "respond": RESPOND }) graph_builder.add_edge(EXECUTE_PLAN, SUMMARIZE) graph_builder.add_edge(SUMMARIZE, RESPOND) graph_builder.add_edge(RESPOND, END) compiled_graph = graph_builder.compile() png_bytes = compiled_graph.get_graph().draw_mermaid_png() # Save to file with open("graph.png", "wb") as f: f.write(png_bytes) print("Graph saved as graph.png") messages = [ { "content": system_prompt, "role": "system" }, { "content": ( "I have a headache and mild fever. What could be the possible conditions? " "Also, lookup medication details for 'Ibuprofen'. " "Please book an appointment for patient 'Alice Smith' with a neurologist on 2025-08-18." ), "role": "user" } ] different_user_prompt = [ { "content": system_prompt, "role": "system" }, { "content": ( "My mother has chest pain and shortness of breath. " "Can you analyze her symptoms? " "Also, please look up information about 'Nitroglycerin' medication. " "Finally, get lab results for patient ID '987654' for the test 'lipid_panel'." 
), "role": "user" } ] compiled_graph.invoke({"messages": messages}) # compiled_graph.invoke({"messages": different_user_prompt}) ``` ## Requirements ```python accelerate==1.9.0 aiohappyeyeballs==2.6.1 aiohttp==3.12.15 aiosignal==1.4.0 annotated-types==0.7.0 anyio==4.10.0 attrs==25.3.0 auto_gptq==0.7.1 autolab-core==1.1.1 beautifulsoup4==4.13.4 bitsandbytes==0.46.1 cachetools==5.5.2 certifi==2025.7.14 charset-normalizer==3.4.2 click==8.2.1 colorama==0.4.6 colorlog==6.9.0 contourpy==1.3.3 cycler==0.12.1 dataclasses-json==0.6.7 datasets==4.0.0 dateparser==1.2.2 ddgs==9.5.4 dill==0.3.8 dotenv==0.9.9 duckduckgo_search==8.1.1 duckling==1.8.0 filelock==3.13.1 fonttools==4.59.0 freetype-py==2.5.1 frozenlist==1.7.0 fsspec==2024.6.1 gekko==1.3.0 google-api-core==2.25.1 google-api-python-client==2.179.0 google-auth==2.40.3 google-auth-httplib2==0.2.0 google-auth-oauthlib==1.2.2 googleapis-common-protos==1.70.0 greenlet==3.2.4 h11==0.16.0 hf-xet==1.1.7 httpcore==1.0.9 httplib2==0.22.0 httpx==0.28.1 httpx-sse==0.4.1 huggingface-hub==0.34.3 idna==3.10 imageio==2.37.0 Jinja2==3.1.4 joblib==1.5.1 jpype1==1.6.0 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.25.0 jsonschema-specifications==2025.4.1 kiwisolver==1.4.8 langchain==0.3.27 langchain-community==0.3.27 langchain-core==0.3.74 langchain-huggingface==0.3.1 langchain-text-splitters==0.3.9 langgraph==0.6.5 langgraph-checkpoint==2.1.1 langgraph-prebuilt==0.6.4 langgraph-sdk==0.2.0 langsmith==0.4.14 lazy_loader==0.4 lxml==6.0.0 manifold3d==3.2.1 mapbox_earcut==1.0.3 markdown-it-py==3.0.0 markdownify==1.1.0 MarkupSafe==2.1.5 marshmallow==3.26.1 matplotlib==3.10.5 mdurl==0.1.2 mpmath==1.3.0 multidict==6.6.3 multiprocess==0.70.16 mypy_extensions==1.1.0 networkx==3.3 numpy==2.1.2 oauthlib==3.3.1 opencv-python==4.12.0.88 optimum==1.27.0 orjson==3.11.2 ormsgpack==1.10.0 packaging==25.0 pandas==2.3.1 peft==0.17.0 pillow==11.0.0 primp==0.15.0 propcache==0.3.2 proto-plus==1.26.1 protobuf==6.32.0 psutil==7.0.0 pyarrow==21.0.0 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycollada==0.9.2 pydantic==2.11.7 pydantic-settings==2.10.1 pydantic_core==2.33.2 pyglet==2.1.8 Pygments==2.19.2 PyOpenGL==3.1.0 pyparsing==3.2.3 pyreadline==2.1 pyrender==0.1.45 python-dateutil==2.9.0.post0 python-dotenv==1.1.1 pytz==2025.2 PyYAML==6.0.2 referencing==0.36.2 regex==2025.7.34 requests==2.32.4 requests-oauthlib==2.0.0 requests-toolbelt==1.0.0 rich==14.1.0 rouge==1.0.1 rpds-py==0.27.0 rsa==4.9.1 rtree==1.4.1 ruamel.yaml==0.18.14 ruamel.yaml.clib==0.2.12 safetensors==0.5.3 scikit-image==0.25.2 scikit-learn==1.7.1 scipy==1.16.1 sentencepiece==0.2.1 setproctitle==1.3.6 shapely==2.1.1 six==1.17.0 smolagents==1.20.0 sniffio==1.3.1 soupsieve==2.7 SQLAlchemy==2.0.43 svg.path==7.0 sympy==1.13.3 tenacity==9.1.2 threadpoolctl==3.6.0 tifffile==2025.6.11 tokenizers==0.21.4 torch==2.7.1+cu126 torchaudio==2.7.1+cu126 torchvision==0.22.1+cu126 tqdm==4.67.1 transformers==4.54.1 trimesh==4.7.4 trl==0.20.0 typing-inspect==0.9.0 typing-inspection==0.4.1 typing_extensions==4.14.1 tzdata==2025.2 tzlocal==5.3.1 uritemplate==4.2.0 urllib3==2.5.0 vhacdx==0.0.8.post2 visualization==1.0.0 xxhash==3.5.0 yarl==1.20.1 zstandard==0.24.0 ``` ## How to Get Started ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch import json torch.manual_seed(42) model_name = "aldsouza/health-agent" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to("cuda") medical_tools = [ { "name": "symptom_checker", "description": 
"Analyze symptoms and provide possible conditions.", "parameters": { "symptoms": { "description": "List of symptoms reported by the patient.", "type": "list[str]", "default": ["headache", "fever"] } } }, { "name": "medication_lookup", "description": "Look up details about a medication by its name.", "parameters": { "medication_name": { "description": "Name of the medication to look up.", "type": "str", "default": "Aspirin" } } }, { "name": "book_appointment", "description": "Schedule a medical appointment with a doctor.", "parameters": { "patient_name": { "description": "Name of the patient.", "type": "str", "default": "John Doe" }, "doctor_specialty": { "description": "Specialty of the doctor to book.", "type": "str", "default": "general practitioner" }, "date": { "description": "Preferred date of appointment (YYYY-MM-DD).", "type": "str", "default": "2025-08-20" } } }, { "name": "get_lab_results", "description": "Retrieve lab test results for a patient by test ID.", "parameters": { "patient_id": { "description": "Unique patient identifier.", "type": "str", "default": "123456" }, "test_id": { "description": "Lab test identifier.", "type": "str", "default": "cbc" } } }, { "name": "request_missing_info", "description": "Ask the user for missing or incomplete information needed to fulfill their request.", "parameters": { "missing_fields": { "description": "List of missing required fields to be clarified by the user.", "type": "list[str]", "default": [] }, "context": { "description": "Optional context or explanation to help the user provide the missing information.", "type": "str", "default": "" } } }, { "name": "medical_device_info", "description": "Retrieve detailed information about a medical device by its name or model number.", "parameters": { "device_name": { "description": "The name or model number of the medical device to look up.", "type": "str", "default": "Blood Pressure Monitor" } } }, { "name": "record_blood_pressure", "description": "Record a patient's blood pressure reading with systolic, diastolic, and pulse rate values.", "parameters": { "patient_id": { "description": "Unique identifier of the patient.", "type": "str", "default": "123456" }, "systolic": { "description": "Systolic blood pressure value (mmHg).", "type": "int", "default": 120 }, "diastolic": { "description": "Diastolic blood pressure value (mmHg).", "type": "int", "default": 80 }, "pulse_rate": { "description": "Pulse rate in beats per minute.", "type": "int", "default": 70 }, "measurement_time": { "description": "Timestamp of the measurement (YYYY-MM-DD HH:MM).", "type": "str", "default": "2025-08-12 09:00" } } }, { "name": "start_blood_pressure_test", "description": "Initiate a blood pressure measurement test for a patient using a connected device.", "parameters": { "patient_id": { "description": "Unique identifier of the patient.", "type": "str", "default": "123456" }, "device_id": { "description": "Identifier or model of the blood pressure measuring device.", "type": "str", "default": "BP-Device-001" } } } ] # Compose the system prompt embedding the tools JSON system_prompt = f""" You are an intelligent AI assistant that uses available tools (functions) to help users achieve their medical-related goals. Your job is to understand the user's intent, identify missing information if needed, and then select and call the most appropriate function(s) to solve the task. # Rules: - ALWAYS use the tools provided to answer the user's request, unless explicitly told not to. 
- Ask clarifying questions ONLY if the user's request is ambiguous or lacks required input parameters. - If multiple tools are needed, use them in sequence. - DO NOT make up data or assume values — request any missing input clearly. # Output Format: - Respond using a JSON list of function calls in the following format: [ {{ "name": "function_name", "arguments": {{ "param1": "value1", "param2": "value2" }} }} ] - Only include the functions needed to complete the task. - If no function is needed or the input is unclear, ask a clarifying question instead of guessing. - Do NOT respond with explanations or natural language outside the JSON block unless explicitly instructed. Following are the tools provided to you: {json.dumps(medical_tools, indent=2)} """ # Example prompt using the medical tools messages = [ { "content": system_prompt, "role": "system" }, { "content": ( "I have a headache and mild fever. What could be the possible conditions? " "Also, lookup medication details for 'Ibuprofen'. " "Please book an appointment for patient 'Alice Smith' with a neurologist on 2025-09-01." ), "role": "user" } ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", ).to(model.device) with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=4096, temperature=0.7, ) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ```
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755612309
Vasya777
2025-08-19T14:06:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lumbering enormous sloth", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:05:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lumbering enormous sloth --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
alok0777/blockassist-bc-masked_pensive_lemur_1755612218
alok0777
2025-08-19T14:05:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked pensive lemur", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:04:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - masked pensive lemur --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/flux-xl-_-bailing-xl_magic-array
Muapi
2025-08-19T14:02:23Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T14:01:54Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # 白棱(FLUX,XL)_魔法阵 - Bailing XL_Magic Array ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: bailing_magic_circle ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:359982@887667", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Muapi/f1-charturn-multi-view-turnaround-model-sheet-character-design
Muapi
2025-08-19T14:01:24Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T14:01:11Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # F1 CharTurn, Multi-view, Turnaround, Model Sheet, Character design ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:784830@877675", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Kurosawama/Llama-3.2-3B-Translation-align
Kurosawama
2025-08-19T14:00:59Z
0
0
transformers
[ "transformers", "safetensors", "trl", "dpo", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T06:10:57Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
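The template above omits a quick-start; here is a minimal sketch, assuming the repository hosts a full causal-LM checkpoint (the name suggests a DPO-aligned Llama-3.2-3B translation model; the plain-text prompt format is an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kurosawama/Llama-3.2-3B-Translation-align"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Plain-text translation prompt; adjust if the model expects a chat template.
prompt = "Translate to English: Je pense, donc je suis."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```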
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755610312
katanyasekolah
2025-08-19T14:00:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:00:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silky sprightly cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/1980-s-style-xl-f1d
Muapi
2025-08-19T14:00:35Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T14:00:26Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # 1980's style XL + F1D ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: 1980 style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:376914@894083", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
thanobidex/blockassist-bc-colorful_shiny_hare_1755610389
thanobidex
2025-08-19T14:00:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T14:00:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful shiny hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
rievil/crackenpy
rievil
2025-08-19T13:59:57Z
0
2
timm
[ "timm", "image-segmentation", "doi:10.57967/hf/3295", "license:bsd", "region:us" ]
image-segmentation
2024-10-01T09:54:22Z
--- license: bsd pipeline_tag: image-segmentation architecture: resnext101_32x8d base_model: - timm/resnext101_32x8d library_name: timm metrics: - accuracy - mean intersection over union --- # Pre-trained model for the CrackenPy package for crack segmentation on building material specimens The repository contains pre-trained models built with the segmentation-models-pytorch package to segment 416x416-pixel RGB images. The resulting classes are "background", "matrix", "crack", and "pore". The purpose is the segmentation of test specimens made from building materials such as cement, alkali-activated materials or geopolymers. ### Model Description - **Model type:** semantic segmentation - **Language(s) (NLP):** Python - **License:** BSD v2 - **Finetuned from model [optional]:** resnet101 ## Uses The model is intended to segment cracks on test specimens, or on images fully filled with a binder matrix containing cracks. The background should be darker than the specimen itself. The segmentation targets fine cracks ranging from 20 µm up to 10 mm. ## Bias, Risks, and Limitations The background and matrix classes may sometimes be confused if the texture of the specimens is too dark or smudged; it is therefore important to run the segmentation on specimens that are as clean as possible. The models of the current version have not been trained on exterior scenes and may segment them poorly. Pores are usually circular in shape, but a crack may occur on the edge of a pore. It is therefore recommended to avoid using the models on highly porous materials. ## Training Details The models originate from the https://github.com/qubvel-org/segmentation_models.pytorch library and were retrained on the crackenpy_dataset dataset. ### Training Data The training dataset can be downloaded from Brno University of Technology upon filling in a form. The dataset is free to use in research and education under the BSD v2 license. The dataset was created under research project No. 22-02098S of the Grant Agency of the Czech Republic, titled "Experimental analysis of the shrinkage, creep and cracking mechanism of the materials based on the alkali-activated slag". ### Training Procedure The training was done in PyTorch using CrossEntropyLoss() together with the AdamW optimizer. ### Results & Metrics The dataset has 1207 images at a resolution of 416x416 pixels together with 1207 masks. The overall training accuracy across all classes reaches 98%; the mean intersection over union reaches 73%. #### Hardware The training was done on an NVIDIA Quadro P4000 with CUDA support. #### Software The models were trained with PyTorch in Python; the segmentation and dataset preparation were done using the LabKit plugin in the FIJI software.
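To make the pipeline concrete, here is a minimal inference sketch, assuming a segmentation-models-pytorch checkpoint built on the resnext101_32x8d encoder named in the card metadata and the four classes listed above; the decoder architecture, weight file name, and normalization are assumptions (the CrackenPy package handles loading properly):

```python
import numpy as np
import torch
import segmentation_models_pytorch as smp
from PIL import Image

CLASSES = ["background", "matrix", "crack", "pore"]

# Hypothetical decoder: the card names the encoder but not the segmentation head.
model = smp.Unet(encoder_name="resnext101_32x8d", encoder_weights=None, classes=len(CLASSES))
model.load_state_dict(torch.load("crackenpy_weights.pth", map_location="cpu"))
model.eval()

# 416x416 RGB input scaled to [0, 1]; the exact normalization is an assumption.
img = Image.open("specimen.png").convert("RGB").resize((416, 416))
x = torch.from_numpy(np.asarray(img)).permute(2, 0, 1).float().unsqueeze(0) / 255.0

with torch.no_grad():
    mask = model(x).argmax(dim=1).squeeze(0).numpy()  # (416, 416) class-index map
print({c: int((mask == i).sum()) for i, c in enumerate(CLASSES)})
```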
## Authors of the dataset Richard Dvorak, Brno University of Technology, Faculty of Civil Engineering, Institute of Building Testing Rostislav Krc, Brno University of Technology, Faculty of Civil Engineering, Institute of Building Testing Vlastimil Bilek, Brno University of Technology, Faculty of Chemistry, Institute of Material Chemistry Barbara Kucharczyková, Brno University of Technology, Faculty of Civil Engineering, Institute of Building Testing ## Citation The model was trained on the CrackenPy dataset and is used in the CrackenPy library: - [Library](https://github.com/Rievil/CrackenPy) - [Model](https://huggingface.co/rievil/crackenpy) - [Dataset](https://huggingface.co/datasets/rievil/crackenpy_dataset) If you use this model, please cite our work: ```tex @misc{richard_dvorak_2024, author = { {Richard Dvorak} }, title = { crackenpy (Revision 04ed02c) }, year = 2024, url = { https://huggingface.co/rievil/crackenpy }, doi = { 10.57967/hf/3295 }, publisher = { Hugging Face } } @software{Dvorak_CrackenPy_Image_segmentation_2024, author = {Dvorak, Richard and Bilek, Vlastimil and Krc, Rostislav and Kucharczykova, Barbara}, doi = {10.5281/zenodo.13969747}, month = oct, title = {{CrackenPy: Image segmentation tool for semantic segmentation of building material surfaces using deep learning}}, url = {https://github.com/Rievil/CrackenPy}, year = {2024} } @misc{richard_dvorak_2024_dataset, author = { {Richard Dvorak} }, title = { crackenpy_dataset (Revision ce5c857) }, year = 2024, url = { https://huggingface.co/datasets/rievil/crackenpy_dataset }, doi = { 10.57967/hf/3496 }, publisher = { Hugging Face } } ``` ## Model Card Contact The author of the dataset is Richard Dvorak, Ph.D. (richard.dvorak@vutbr.cz, tel.: +420 777 678 613), an employee of the Institute of Building Testing, Faculty of Civil Engineering, Brno University of Technology.
michaelcpage345/blockassist-bc-miniature_deadly_anteater_1755609800
michaelcpage345
2025-08-19T13:56:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "miniature deadly anteater", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:56:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - miniature deadly anteater --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/radiant-realism-pro-realistic-makeup-skin-texture-skin-color-flux.1d
Muapi
2025-08-19T13:56:35Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:56:23Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Radiant Realism Pro (Realistic, Makeup, Skin Texture, Skin Color) Flux.1D ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:970421@1086588", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Muapi/neon-cyberpunk-animals-flux-sdxl
Muapi
2025-08-19T13:56:12Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:56:03Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Neon Cyberpunk - Animals FLUX & SDXL ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: mad-cybranmls, cybernetic parts, mechanical parts ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:281944@1067893", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755610110
helmutsukocok
2025-08-19T13:54:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:54:41Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/flux-sdxl-black-diamonds
Muapi
2025-08-19T13:54:12Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:53:57Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# [Flux/SDXL] - 🖤 Black Diamonds 🖤

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**: made out of black diamonds, black diamonds

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:607623@740146", "weight": 1.0}],
    "width": 1024, "height": 1024, "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```
unitova/blockassist-bc-zealous_sneaky_raven_1755610013
unitova
2025-08-19T13:53:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:53:47Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/alejandro-jodorowsky-style
Muapi
2025-08-19T13:53:43Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:53:34Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# Alejandro Jodorowsky Style

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**: Alejandro Jodorowsky Style

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:62712@1403331", "weight": 1.0}],
    "width": 1024, "height": 1024, "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```
Muapi/aerith-gainsborough-final-fantasy-vii
Muapi
2025-08-19T13:52:58Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:52:47Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# Aerith Gainsborough - Final Fantasy VII

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**: aerith

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:168676@782206", "weight": 1.0}],
    "width": 1024, "height": 1024, "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```
hug-mono/checkworthy-binary-classification-training-1755585731
hug-mono
2025-08-19T13:51:55Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:google-bert/bert-base-uncased", "lora", "transformers", "base_model:google-bert/bert-base-uncased", "license:apache-2.0", "region:us" ]
null
2025-08-19T13:51:51Z
---
library_name: peft
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- base_model:adapter:google-bert/bert-base-uncased
- lora
- transformers
model-index:
- name: checkworthy-binary-classification-training-1755585731
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# checkworthy-binary-classification-training-1755585731

This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.1106713456200193e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9348819720458172,0.9285998615546803) and epsilon=1.9972958061508847e-07 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_ratio: 0.12890328790683203
- lr_scheduler_warmup_steps: 488
- num_epochs: 40

### Training results

### Framework versions

- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
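Since the card gives no usage snippet, here is a minimal loading sketch, assuming the adapter was trained for binary sequence classification; the label count and example input are assumptions, not taken from the card:

```python
# Minimal sketch, assuming a binary sequence-classification LoRA adapter.
# The num_labels value and the example sentence are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=2
)
model = PeftModel.from_pretrained(
    base, "hug-mono/checkworthy-binary-classification-training-1755585731"
)
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

inputs = tokenizer(
    "The unemployment rate fell to 3.5% last year.", return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # class probabilities (check-worthy vs. not)
```

If the adapter is meant to be served without PEFT at inference time, `model.merge_and_unload()` folds the LoRA weights back into the base model.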
alok0777/blockassist-bc-masked_pensive_lemur_1755611305
alok0777
2025-08-19T13:50:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked pensive lemur", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:49:40Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked pensive lemur
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/the-deep-abyss-flux
Muapi
2025-08-19T13:50:48Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-19T13:50:36Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# The Deep Abyss FLUX

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**: 4byss

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:930359@1041413", "weight": 1.0}],
    "width": 1024, "height": 1024, "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755609650
hakimjustbao
2025-08-19T13:47:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:47:50Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1755609526
chainway9
2025-08-19T13:46:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:46:35Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
WangChongan/rl-Pixelcopter-PLE-v0
WangChongan
2025-08-19T13:43:56Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-08-19T12:57:38Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: rl-Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 25.30 +/- 17.57
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
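A minimal evaluation sketch, assuming the repository follows the Unit 4 convention of storing the pickled policy as `model.pt`; the file name, the `.act()` interface, and the environment setup are assumptions, not confirmed by this card:

```python
# Minimal sketch; assumes the Unit 4 convention of a pickled policy in model.pt.
# Unpickling the full module requires the training Policy class to be importable.
import torch
from huggingface_hub import hf_hub_download

checkpoint = hf_hub_download(
    repo_id="WangChongan/rl-Pixelcopter-PLE-v0", filename="model.pt"
)
policy = torch.load(checkpoint, map_location="cpu")
policy.eval()

# Evaluation loop (requires the gym-pygame / PLE package for the environment):
# env = gym.make("Pixelcopter-PLE-v0")
# state = env.reset()
# action, _ = policy.act(state)  # course Reinforce policies expose .act(state)
```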
meme46/lora-financialqa
meme46
2025-08-19T13:42:06Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T13:41:53Z
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** meme46
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
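A minimal inference sketch, assuming the repo holds a causal-LM LoRA adapter that Unsloth's loader can resolve against the 4-bit base model; the sequence length and the example financial question are assumptions:

```python
# Minimal sketch; max_seq_length and the prompt are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meme46/lora-financialqa",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switches Unsloth into fast-generation mode

inputs = tokenizer(
    "What does the price-to-earnings ratio measure?", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```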
lilTAT/blockassist-bc-gentle_rugged_hare_1755610875
lilTAT
2025-08-19T13:41:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle rugged hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-19T13:41:39Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).