modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-22 00:40:10) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 517 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-22 00:39:40) | card (string, 11 chars to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
mradermacher/Omega-Qwen2.5-Coder-3B-GGUF | mradermacher | 2025-08-20T14:46:02Z | 304 | 1 | transformers |
[
"transformers",
"gguf",
"Thinking: Disabled",
"Forge",
"code",
"mot",
"stem",
"coder",
"trl",
"en",
"zh",
"dataset:prithivMLmods/Open-Omega-Forge-1M",
"base_model:prithivMLmods/Omega-Qwen2.5-Coder-3B",
"base_model:quantized:prithivMLmods/Omega-Qwen2.5-Coder-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-16T07:01:05Z |
---
base_model: prithivMLmods/Omega-Qwen2.5-Coder-3B
datasets:
- prithivMLmods/Open-Omega-Forge-1M
language:
- en
- zh
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- 'Thinking: Disabled'
- Forge
- code
- mot
- stem
- coder
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/prithivMLmods/Omega-Qwen2.5-Coder-3B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Omega-Qwen2.5-Coder-3B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
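As a concrete starting point, here is a minimal sketch (not part of the original README) that fetches one quant from the table below and runs it locally. It assumes `huggingface_hub` and the `llama-cpp-python` bindings are installed; the context size and prompt are arbitrary choices.
```python
# Hedged usage sketch: download a single quant file and load it with
# llama-cpp-python. The filename comes from the Provided Quants table below.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Omega-Qwen2.5-Coder-3B-GGUF",
    filename="Omega-Qwen2.5-Coder-3B.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an assumption
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```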
## Provided Quants
(sorted by size, not necessarily by quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jo-mengr/mmcontext-pubmedbert-gs-100k | jo-mengr | 2025-08-20T14:44:28Z | 0 | 0 | sentence-transformers |
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:81143",
"loss:MultipleNegativesRankingLoss",
"code",
"dataset:jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:NeuML/pubmedbert-base-embeddings",
"base_model:finetune:NeuML/pubmedbert-base-embeddings",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-08-20T14:44:10Z |
---
language:
- code
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:81143
- loss:MultipleNegativesRankingLoss
base_model: NeuML/pubmedbert-base-embeddings
widget:
- source_sentence: sample_idx:census_d7d7e89c-c93a-422d-8958-9b4a90b69558_1563
sentences:
- This measurement was conducted with 10x 5' v1. Naive B cell from blood of a 26-year
old male, activated with CD3.
- sample_idx:census_d7d7e89c-c93a-422d-8958-9b4a90b69558_5036
- This measurement was conducted with 10x 5' v1. A 26-year-old male individual's
blood sample, containing naive thymus-derived CD4-positive, alpha-beta T cells,
with no activation or treatment, and in G1 phase.
- source_sentence: sample_idx:census_cf83c98a-3791-4537-bbde-a719f6d73c13_738
sentences:
- This measurement was conducted with 10x 3' v3. Blasts cells derived from the blood
of a 4-month old male.
- sample_idx:census_cf83c98a-3791-4537-bbde-a719f6d73c13_1016
- This measurement was conducted with 10x 3' v3. This is a megakaryocyte-erythroid
progenitor cell (MEP-like) derived from a 1-month-old female patient with KMT2A-rearranged
(KMT2A-r) infant acute lymphoblastic leukemia (ALL). The cell exhibits increased
lineage plasticity, downregulated steroid response pathways, and belongs to a
hematopoietic stem and progenitor-like (HSPC-like) population that forms an immunosuppressive
signaling circuit with cytotoxic lymphocytes.
- source_sentence: sample_idx:census_2872f4b0-b171-46e2-abc6-befcf6de6306_2050
sentences:
- sample_idx:census_2872f4b0-b171-46e2-abc6-befcf6de6306_1719
- This measurement was conducted with 10x 5' v2. Memory B cell derived from a 65-79
year-old male, taken from the mesenteric lymph node.
- This measurement was conducted with 10x 5' v2. IgA plasma cell sample taken from
the mesenteric lymph node of a 65-79 year-old female.
- source_sentence: sample_idx:census_3f31f8ce-bbf6-4df8-8203-aa240ed03026_299
sentences:
- This measurement was conducted with 10x 3' v3. Neuron cell type from a 50-year-old
male human cerebral cortex, specifically from the Cingulate gyrus, rostral (CgGr),
Ventral division of MFC - A24 region, with European self-reported ethnicity, analyzed
at the nucleus level.
- This measurement was conducted with 10x 3' v3. Neuron cell type from a 50-year-old
male human cerebral cortex, specifically the rostral cingulate gyrus, ventral
division of MFC, A24, with European ethnicity.
- sample_idx:census_3f31f8ce-bbf6-4df8-8203-aa240ed03026_30
- source_sentence: sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_14644
sentences:
- sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_16130
- This measurement was conducted with 10x 3' v3. Classical monocytes derived from
the blood of a female individual in her seventies.
- This measurement was conducted with 10x 5' v2. Sample is a CD8-positive, alpha-beta
memory T cell, specifically a cytotoxic T cell, from the lamina propria tissue
of an individual in her eighth decade of life.
datasets:
- jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on NeuML/pubmedbert-base-embeddings
results:
- task:
type: triplet
name: Triplet
dataset:
name: cellxgene pseudo bulk 100k multiplets natural language annotation cell
sentence 1
type: cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1
metrics:
- type: cosine_accuracy
value: 0.5201420783996582
name: Cosine Accuracy
---
# SentenceTransformer based on NeuML/pubmedbert-base-embeddings
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) on the [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) <!-- at revision d6eaca8254bc229f3ca42749a5510ae287eb3486 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation)
- **Language:** code
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): MMContextEncoder(
(text_encoder): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0-11): 12 x BertLayer(
(attention): BertAttention(
(self): BertSdpaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(text_adapter): AdapterModule(
(net): Sequential(
(0): Linear(in_features=768, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=1024, bias=True)
(3): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(pooling): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-gs-100k")
# Run inference
sentences = [
'sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_14644',
"This measurement was conducted with 10x 5' v2. Sample is a CD8-positive, alpha-beta memory T cell, specifically a cytotoxic T cell, from the lamina propria tissue of an individual in her eighth decade of life.",
"This measurement was conducted with 10x 3' v3. Classical monocytes derived from the blood of a female individual in her seventies.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, -0.2396, -0.3061],
# [-0.2396, 1.0000, 0.9110],
# [-0.3061, 0.9110, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.5201** |
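To run a comparable measurement on your own triplets, here is a hedged sketch using the same evaluator (the three strings below are illustrative placeholders, not the actual evaluation split):
```python
# Hedged sketch: TripletEvaluator reports the fraction of triplets where the
# anchor embedding is closer to the positive than to the negative.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-gs-100k")
evaluator = TripletEvaluator(
    anchors=["Classical monocytes derived from the blood of a female individual in her seventies."],
    positives=["Classical monocytes from the blood of a woman in her seventies."],
    negatives=["Neuron from the cerebral cortex of a 50-year-old male."],
    name="toy_triplets",  # placeholder name
)
print(evaluator(model))  # e.g. {'toy_triplets_cosine_accuracy': ...}
```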
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [9916878](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/9916878bbf20fb8f9d6a0be4c997236e027cabd4)
* Size: 81,143 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 56 characters</li><li>mean: 58.72 characters</li><li>max: 60 characters</li></ul> | <ul><li>min: 92 characters</li><li>mean: 216.13 characters</li><li>max: 900 characters</li></ul> | <ul><li>min: 101 characters</li><li>mean: 215.14 characters</li><li>max: 870 characters</li></ul> | <ul><li>min: 56 characters</li><li>mean: 58.75 characters</li><li>max: 60 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:--------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------|
| <code>sample_idx:census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_26009</code> | <code>This measurement was conducted with 10x 3' v2. A proliferating lymphocyte cell sample, obtained from a 34-year-old female Asian individual, derived from peripheral blood mononuclear cells.</code> | <code>This measurement was conducted with 10x 3' v2. Sample is a 25-year-old female with European ethnicity, having CD8-positive, alpha-beta T cell type. This cell type exhibits elevated expression of type 1 interferon-stimulated genes (ISGs) in monocytes, reduction of naïve CD4+ T cells correlating with monocyte ISG expression, and expansion of repertoire-restricted cytotoxic GZMH+ CD8+ T cells.</code> | <code>sample_idx:census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_14165</code> |
| <code>sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_6333</code> | <code>This measurement was conducted with 10x 5' v1. Sample is a cell from the omentum tissue, specifically an effector memory CD4-positive, alpha-beta T cell, from a female in her sixth decade.</code> | <code>This measurement was conducted with 10x 5' v2. Conventional dendritic cell from the jejunal epithelium of a female in her eighth decade.</code> | <code>sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_2714</code> |
| <code>sample_idx:census_adda0684-f8ea-4403-b393-2a25607077c4_271</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male, specifically from the thalamic complex, specifically the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG).</code> | <code>This measurement was conducted with 10x 3' v3. Neuron from the thalamic complex (thalamus, posterior nuclear complex of thalamus, medial geniculate nuclei) of a 42-year-old male, identified as a midbrain-derived inhibitory neuron.</code> | <code>sample_idx:census_adda0684-f8ea-4403-b393-2a25607077c4_425</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
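For reference, this loss would typically be constructed as follows in Sentence Transformers (a hedged sketch mirroring the parameters above; the trainer and dataset wiring are omitted):
```python
# Hedged sketch: MultipleNegativesRankingLoss with the scale and similarity
# function listed above. Other in-batch examples act as negatives.
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("NeuML/pubmedbert-base-embeddings")
loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```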
### Evaluation Dataset
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [9916878](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/9916878bbf20fb8f9d6a0be4c997236e027cabd4)
* Size: 9,011 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 56 characters</li><li>mean: 58.73 characters</li><li>max: 60 characters</li></ul> | <ul><li>min: 99 characters</li><li>mean: 209.99 characters</li><li>max: 941 characters</li></ul> | <ul><li>min: 102 characters</li><li>mean: 213.87 characters</li><li>max: 981 characters</li></ul> | <ul><li>min: 56 characters</li><li>mean: 58.73 characters</li><li>max: 60 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:--------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------|
| <code>sample_idx:census_0b4a15a7-4e9e-4555-9733-2423e5c66469_490</code> | <code>This measurement was conducted with 10x 3' v3. Cell sample from the cortex of kidney, taken from a 43-year-old male of European ethnicity with a reported history of kidney cancer. The cell type is identified as a kidney collecting duct intercalated cell.</code> | <code>This measurement was conducted with 10x 3' v3. Kidney collecting duct intercalated cell from a 43-year old European male with kidney cancer, taken from the cortex of kidney and cryopreserved for further analysis.</code> | <code>sample_idx:census_0b4a15a7-4e9e-4555-9733-2423e5c66469_9</code> |
| <code>sample_idx:census_4976b234-9028-4b4b-8a2f-8ac59d636219_269</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male cerebellum, specifically from the Cerebellar Vermis - CBV region, with European self-reported ethnicity, analyzed at the nucleus level.</code> | <code>This measurement was conducted with 10x 3' v3. Endothelial cells derived from the cerebellum (specifically, cerebellar vermis) of a 42-year-old male, classified under the vascular supercluster term.</code> | <code>sample_idx:census_4976b234-9028-4b4b-8a2f-8ac59d636219_923</code> |
| <code>sample_idx:census_44882825-0da1-4547-b721-2c6105d4a9d1_10258</code> | <code>This measurement was conducted with 10x 5' v1. Cell sample from the tonsil of a 9-year-old female with recurrent tonsillitis, characterized as a centroblast B cell with IGLC2, IGLV7-43, IGLJ3 immunoglobulin genes expressed.</code> | <code>This measurement was conducted with 10x 5' v1. Centroblast cells derived from a 3-year-old male human tonsil sample, with obstructive sleep apnea and recurrent tonsillitis, undergoing affinity maturation and differentiation into memory or plasma cells.</code> | <code>sample_idx:census_44882825-0da1-4547-b721-2c6105d4a9d1_9654</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 0.05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `bf16`: True
- `gradient_checkpointing`: True
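For orientation, these non-default values map onto the training arguments roughly as follows (a hedged sketch assuming the Sentence Transformers v3+ API; `output_dir` is a placeholder):
```python
# Hedged sketch: the non-default hyperparameters above as explicit arguments.
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="mmcontext-pubmedbert-gs-100k",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=0.05,
    num_train_epochs=4,
    warmup_ratio=0.1,
    bf16=True,
    gradient_checkpointing=True,
)
```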
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | cellxgene pseudo bulk 100k multiplets natural language annotation loss | cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_cosine_accuracy |
|:------:|:----:|:-------------:|:----------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------:|
| 0.3155 | 100 | 4.3275 | 23.1292 | 0.5057 |
| 0.6309 | 200 | 3.2975 | 22.5206 | 0.5042 |
| 0.9464 | 300 | 2.9622 | 19.2869 | 0.5049 |
| 1.2618 | 400 | 2.7929 | 20.7311 | 0.5031 |
| 1.5773 | 500 | 2.6904 | 17.7235 | 0.5159 |
| 1.8927 | 600 | 2.6173 | 20.9246 | 0.5122 |
| 2.2082 | 700 | 2.5566 | 21.0062 | 0.5133 |
| 2.5237 | 800 | 2.462 | 20.2036 | 0.5160 |
| 2.8391 | 900 | 2.4448 | 19.7884 | 0.5170 |
| 3.1546 | 1000 | 2.4279 | 18.8732 | 0.5196 |
| 3.4700 | 1100 | 2.4133 | 18.3714 | 0.5205 |
| 3.7855 | 1200 | 2.4083 | 18.2855 | 0.5201 |
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 5.0.0
- Transformers: 4.55.0.dev0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.9.0
- Datasets: 2.19.1
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755701024 | lilTAT | 2025-08-20T14:44:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:44:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
travelgate/Check-room_type-classifier | travelgate | 2025-08-20T14:42:43Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-20T14:42:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
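In the absence of official instructions, here is a hedged starter sketch (an assumption based on the repo metadata, which lists a DistilBERT text-classification head; the example input is hypothetical):
```python
# Hedged sketch: load the checkpoint with the standard text-classification
# pipeline; this assumes the model is pipeline-compatible.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="travelgate/Check-room_type-classifier",
)
print(classifier("Double room with sea view and balcony"))  # hypothetical input
```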
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Nous-V1-2B-GGUF | mradermacher | 2025-08-20T14:42:21Z | 49 | 0 | transformers |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"fr",
"pt",
"de",
"ro",
"sv",
"da",
"bg",
"ru",
"cs",
"el",
"uk",
"es",
"nl",
"sk",
"hr",
"pl",
"lt",
"nb",
"nn",
"fa",
"sl",
"gu",
"lv",
"it",
"oc",
"ne",
"mr",
"be",
"sr",
"lb",
"vec",
"as",
"cy",
"szl",
"ast",
"hne",
"awa",
"mai",
"bho",
"sd",
"ga",
"fo",
"hi",
"pa",
"bn",
"or",
"tg",
"yi",
"lmo",
"lij",
"scn",
"fur",
"sc",
"gl",
"ca",
"is",
"sq",
"li",
"prs",
"af",
"mk",
"si",
"ur",
"mag",
"bs",
"hy",
"zh",
"yue",
"my",
"ar",
"he",
"mt",
"id",
"ms",
"tl",
"ceb",
"jv",
"su",
"min",
"ban",
"pag",
"ilo",
"war",
"ta",
"te",
"kn",
"ml",
"tr",
"az",
"uz",
"kk",
"ba",
"tt",
"th",
"lo",
"fi",
"et",
"hu",
"vi",
"km",
"ja",
"ko",
"ka",
"eu",
"ht",
"pap",
"kea",
"tpi",
"sw",
"base_model:NoemaResearch/Nous-1-2B",
"base_model:quantized:NoemaResearch/Nous-1-2B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-19T03:15:23Z |
---
base_model: NoemaResearch/Nous-1-2B
language:
- en
- fr
- pt
- de
- ro
- sv
- da
- bg
- ru
- cs
- el
- uk
- es
- nl
- sk
- hr
- pl
- lt
- nb
- nn
- fa
- sl
- gu
- lv
- it
- oc
- ne
- mr
- be
- sr
- lb
- vec
- as
- cy
- szl
- ast
- hne
- awa
- mai
- bho
- sd
- ga
- fo
- hi
- pa
- bn
- or
- tg
- yi
- lmo
- lij
- scn
- fur
- sc
- gl
- ca
- is
- sq
- li
- prs
- af
- mk
- si
- ur
- mag
- bs
- hy
- zh
- yue
- my
- ar
- he
- mt
- id
- ms
- tl
- ceb
- jv
- su
- min
- ban
- pag
- ilo
- war
- ta
- te
- kn
- ml
- tr
- az
- uz
- kk
- ba
- tt
- th
- lo
- fi
- et
- hu
- vi
- km
- ja
- ko
- ka
- eu
- ht
- pap
- kea
- tpi
- sw
library_name: transformers
license: other
license_link: https://huggingface.co/apexion-ai/Nous-V1-8B/blob/main/LICENSE.md
license_name: anvdl-1.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NoemaResearch/Nous-1-2B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Nous-V1-2B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Nous-V1-2B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily by quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-2B-GGUF/resolve/main/Nous-V1-2B.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-2B-GGUF/resolve/main/Nous-V1-2B.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-2B-GGUF/resolve/main/Nous-V1-2B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-2B-GGUF/resolve/main/Nous-V1-2B.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-2B-GGUF/resolve/main/Nous-V1-2B.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-2B-GGUF/resolve/main/Nous-V1-2B.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-2B-GGUF/resolve/main/Nous-V1-2B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-2B-GGUF/resolve/main/Nous-V1-2B.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-2B-GGUF/resolve/main/Nous-V1-2B.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-2B-GGUF/resolve/main/Nous-V1-2B.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-2B-GGUF/resolve/main/Nous-V1-2B.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-2B-GGUF/resolve/main/Nous-V1-2B.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
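The table above can also be enumerated programmatically; here is a hedged sketch using `huggingface_hub` (not part of the original workflow):
```python
# Hedged sketch: list the .gguf files in this repo instead of reading the table.
from huggingface_hub import list_repo_files

files = [f for f in list_repo_files("mradermacher/Nous-V1-2B-GGUF") if f.endswith(".gguf")]
for name in sorted(files):
    print(name)
```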
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
dileepsathyan/my_awesome_qa_model | dileepsathyan | 2025-08-20T14:42:14Z | 0 | 0 | transformers |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-08-20T14:31:08Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7156
## Model description
More information needed
## Intended uses & limitations
More information needed
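A minimal usage sketch (an assumption based on the DistilBERT question-answering head, not from the original card):
```python
# Hedged sketch: load the checkpoint with the standard question-answering
# pipeline; question and context are illustrative.
from transformers import pipeline

qa = pipeline("question-answering", model="dileepsathyan/my_awesome_qa_model")
result = qa(
    question="What is the model fine-tuned from?",
    context="my_awesome_qa_model is a fine-tuned version of distilbert-base-uncased.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```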
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3540 |
| 2.6485 | 2.0 | 500 | 1.7377 |
| 2.6485 | 3.0 | 750 | 1.7156 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
DominikM198/UrbanFusion | DominikM198 | 2025-08-20T14:42:02Z | 0 | 0 | null |
[
"SpatialRepresentationLearning",
"GeoFoundationModel",
"GeoFM",
"ContrastiveLearning",
"Mutlimodal",
"any-to-any",
"dataset:DominikM198/PP2-M",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"license:cc-by-4.0",
"region:us"
] | any-to-any | 2025-08-20T11:34:14Z |
---
license: cc-by-4.0
datasets:
- DominikM198/PP2-M
base_model:
- openai/clip-vit-large-patch14
- BAAI/bge-small-en-v1.5
- torchgeo/vit_small_patch16_224_sentinel2_all_moco
- DominikM198/OSM-MAE
pipeline_tag: any-to-any
tags:
- SpatialRepresentationLearning
- GeoFoundationModel
- GeoFM
- ContrastiveLearning
- Mutlimodal
---
|
mradermacher/Nous-V1-8B-GGUF | mradermacher | 2025-08-20T14:41:38Z | 91 | 0 | transformers |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"fr",
"pt",
"de",
"ro",
"sv",
"da",
"bg",
"ru",
"cs",
"el",
"uk",
"es",
"nl",
"sk",
"hr",
"pl",
"lt",
"nb",
"nn",
"fa",
"sl",
"gu",
"lv",
"it",
"oc",
"ne",
"mr",
"be",
"sr",
"lb",
"vec",
"as",
"cy",
"szl",
"ast",
"hne",
"awa",
"mai",
"bho",
"sd",
"ga",
"fo",
"hi",
"pa",
"bn",
"or",
"tg",
"yi",
"lmo",
"lij",
"scn",
"fur",
"sc",
"gl",
"ca",
"is",
"sq",
"li",
"prs",
"af",
"mk",
"si",
"ur",
"mag",
"bs",
"hy",
"zh",
"yue",
"my",
"ar",
"he",
"mt",
"id",
"ms",
"tl",
"ceb",
"jv",
"su",
"min",
"ban",
"pag",
"ilo",
"war",
"ta",
"te",
"kn",
"ml",
"tr",
"az",
"uz",
"kk",
"ba",
"tt",
"th",
"lo",
"fi",
"et",
"hu",
"vi",
"km",
"ja",
"ko",
"ka",
"eu",
"ht",
"pap",
"kea",
"tpi",
"sw",
"base_model:NoemaResearch/Nous-1-8B",
"base_model:quantized:NoemaResearch/Nous-1-8B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-19T10:32:50Z |
---
base_model: NoemaResearch/Nous-1-8B
language:
- en
- fr
- pt
- de
- ro
- sv
- da
- bg
- ru
- cs
- el
- uk
- es
- nl
- sk
- hr
- pl
- lt
- nb
- nn
- fa
- sl
- gu
- lv
- it
- oc
- ne
- mr
- be
- sr
- lb
- vec
- as
- cy
- szl
- ast
- hne
- awa
- mai
- bho
- sd
- ga
- fo
- hi
- pa
- bn
- or
- tg
- yi
- lmo
- lij
- scn
- fur
- sc
- gl
- ca
- is
- sq
- li
- prs
- af
- mk
- si
- ur
- mag
- bs
- hy
- zh
- yue
- my
- ar
- he
- mt
- id
- ms
- tl
- ceb
- jv
- su
- min
- ban
- pag
- ilo
- war
- ta
- te
- kn
- ml
- tr
- az
- uz
- kk
- ba
- tt
- th
- lo
- fi
- et
- hu
- vi
- km
- ja
- ko
- ka
- eu
- ht
- pap
- kea
- tpi
- sw
library_name: transformers
license: other
license_link: https://huggingface.co/apexion-ai/Nous-V1-8B/blob/main/LICENSE.md
license_name: anvdl-1.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NoemaResearch/Nous-1-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Nous-V1-8B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Nous-V1-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily by quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-8B-GGUF/resolve/main/Nous-V1-8B.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-8B-GGUF/resolve/main/Nous-V1-8B.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-8B-GGUF/resolve/main/Nous-V1-8B.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-8B-GGUF/resolve/main/Nous-V1-8B.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-8B-GGUF/resolve/main/Nous-V1-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-8B-GGUF/resolve/main/Nous-V1-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-8B-GGUF/resolve/main/Nous-V1-8B.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-8B-GGUF/resolve/main/Nous-V1-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-8B-GGUF/resolve/main/Nous-V1-8B.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-8B-GGUF/resolve/main/Nous-V1-8B.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-8B-GGUF/resolve/main/Nous-V1-8B.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-V1-8B-GGUF/resolve/main/Nous-V1-8B.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755699230 | kojeklollipop | 2025-08-20T14:40:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:40:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755699160 | vwzyrraz7l | 2025-08-20T14:40:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:40:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
steinunnfridriks/ScandiBERTBias | steinunnfridriks | 2025-08-20T14:38:56Z | 5 | 0 | null |
[
"safetensors",
"xlm-roberta",
"bias-detection",
"icelandic",
"ner",
"socially-responsible-ai",
"prejudice-detection",
"huggingface",
"transformer",
"is",
"dataset:IceBiasNER",
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-08-15T18:01:15Z |
---
license: bigscience-openrail-m
language:
- is
tags:
- bias-detection
- icelandic
- ner
- socially-responsible-ai
- prejudice-detection
- huggingface
- transformer
datasets:
- IceBiasNER
widget:
- text: "Þetta helvítis útlenska pakk..."
---
# ScandiBERT Bias-Aware NER (Icelandic)
**Trigger warning:** This model detects biased, offensive, or harmful language. Examples in this card may contain such language, included solely for research purposes.
## Model Description
This is a fine-tuned version of **ScandiBERT** for Named Entity Recognition (NER) to identify biased and potentially harmful expressions in Icelandic text.
It was trained on automatically annotated sentences spanning multiple social bias categories. The covered classes are:
- **B-ADDICTION, I-ADDICTION**
- **B-DISABILITY, I-DISABILITY**
- **B-ORIGIN, I-ORIGIN**
- **B-GENERAL, I-GENERAL**
- **B-LGBTQIA, I-LGBTQIA**
- **B-LOOKS, I-LOOKS**
- **B-PERSONAL, I-PERSONAL**
- **B-PROFANITY, I-PROFANITY**
- **B-RELIGION, I-RELIGION**
- **B-SEXUAL, I-SEXUAL**
- **B-SOCIAL_STATUS, I-SOCIAL_STATUS**
- **B-STUPIDITY, I-STUPIDITY**
- **B-VULGAR, I-VULGAR**
- **B-WOMEN, I-WOMEN**
The model flags words or phrases belonging to these categories, producing BIO tags (e.g., `B-WOMEN`, `I-WOMEN`, `O`).
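The tags can be consumed with the standard token-classification pipeline; here is a hedged sketch (assuming the checkpoint is pipeline-compatible), reusing the widget sentence from the metadata above:
```python
# Hedged sketch: run the NER model and merge B-/I- spans per category.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="steinunnfridriks/ScandiBERTBias",
    aggregation_strategy="simple",
)
print(tagger("Þetta helvítis útlenska pakk..."))
```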
## Intended Uses & Limitations
### Intended Use
- Research on bias detection in low-resource languages
- Educational tools for raising awareness of bias in language
- Civic engagement platforms encouraging inclusive language
### Limitations
- Vocabulary-based weak supervision means some bias forms may be missed
- No sentence-level or discourse-level interpretation
- Mislabeling possible in critical, reclaimed, or journalistic contexts
⚠️ **Not intended for punitive monitoring or censorship.** Outputs are prompts for reflection, not judgments.
## Performance
**Evaluation datasets:**
- **Test set**: 15,383 automatically annotated sentences (silver data)
- **Gold set**: 190 manually reviewed sentences
**Macro F1 performance highlights:**
- Test set: 0.978 (CI: 0.978-0.978)
- Gold set: 0.861 (CI: 0.859-0.862)
## Relevant Information
- **Base model**: [mBERT](https://huggingface.co/google-bert/bert-base-multilingual-cased)
- **Data source**: [IceBiasNER](https://huggingface.co/datasets/steinunnfridriks/IceBiasNER)
## Ethical Considerations
This model is released under the **[BigScience OpenRAIL-M License](https://www.licenses.ai/ai-licenses)**, which allows free use with responsible-use restrictions.
Prohibited uses include:
- Harassment or discrimination
- Generating disinformation or hateful content
- Surveillance targeting individuals or groups
## Citation
Will be updated.
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755698620 | milliarderdol | 2025-08-20T14:37:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:37:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nsschw/tweetllm_qwen3_mini | nsschw | 2025-08-20T14:37:26Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-20T14:34:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
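In the absence of official instructions, here is a hedged starter sketch (an assumption based on the repo metadata, which lists a Qwen3 text-generation model; the prompt is illustrative):
```python
# Hedged sketch: standard causal-LM loading path via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nsschw/tweetllm_qwen3_mini"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Classify the sentiment of this tweet: ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```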
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Nous-1-8B-GGUF
|
mradermacher
| 2025-08-20T14:37:21Z | 234 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"fr",
"pt",
"de",
"ro",
"sv",
"da",
"bg",
"ru",
"cs",
"el",
"uk",
"es",
"nl",
"sk",
"hr",
"pl",
"lt",
"nb",
"nn",
"fa",
"sl",
"gu",
"lv",
"it",
"oc",
"ne",
"mr",
"be",
"sr",
"lb",
"vec",
"as",
"cy",
"szl",
"ast",
"hne",
"awa",
"mai",
"bho",
"sd",
"ga",
"fo",
"hi",
"pa",
"bn",
"or",
"tg",
"yi",
"lmo",
"lij",
"scn",
"fur",
"sc",
"gl",
"ca",
"is",
"sq",
"li",
"prs",
"af",
"mk",
"si",
"ur",
"mag",
"bs",
"hy",
"zh",
"yue",
"my",
"ar",
"he",
"mt",
"id",
"ms",
"tl",
"ceb",
"jv",
"su",
"min",
"ban",
"pag",
"ilo",
"war",
"ta",
"te",
"kn",
"ml",
"tr",
"az",
"uz",
"kk",
"ba",
"tt",
"th",
"lo",
"fi",
"et",
"hu",
"vi",
"km",
"ja",
"ko",
"ka",
"eu",
"ht",
"pap",
"kea",
"tpi",
"sw",
"base_model:NoemaResearch/Nous-1-8B",
"base_model:quantized:NoemaResearch/Nous-1-8B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-23T09:40:19Z |
---
base_model: NoemaResearch/Nous-1-8B
language:
- en
- fr
- pt
- de
- ro
- sv
- da
- bg
- ru
- cs
- el
- uk
- es
- nl
- sk
- hr
- pl
- lt
- nb
- nn
- fa
- sl
- gu
- lv
- it
- oc
- ne
- mr
- be
- sr
- lb
- vec
- as
- cy
- szl
- ast
- hne
- awa
- mai
- bho
- sd
- ga
- fo
- hi
- pa
- bn
- or
- tg
- yi
- lmo
- lij
- scn
- fur
- sc
- gl
- ca
- is
- sq
- li
- prs
- af
- mk
- si
- ur
- mag
- bs
- hy
- zh
- yue
- my
- ar
- he
- mt
- id
- ms
- tl
- ceb
- jv
- su
- min
- ban
- pag
- ilo
- war
- ta
- te
- kn
- ml
- tr
- az
- uz
- kk
- ba
- tt
- th
- lo
- fi
- et
- hu
- vi
- km
- ja
- ko
- ka
- eu
- ht
- pap
- kea
- tpi
- sw
library_name: transformers
license: other
license_link: https://huggingface.co/apexion-ai/Nous-V1-8B/blob/main/LICENSE.md
license_name: anvdl-1.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NoemaResearch/Nous-1-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Nous-1-8B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Nous-1-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
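As a minimal sketch, here is one way to run one of these quants with the `llama-cpp-python` bindings; the filename matches the Q4_K_M row in the table below, and the context size is an illustrative choice, not a recommendation from this card.
```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python huggingface-hub).
# Repo id and quant filename match the Q4_K_M entry below; adjust to the file you download.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Nous-1-8B-GGUF",
    filename="Nous-1-8B.Q4_K_M.gguf",
    n_ctx=4096,  # context window; raise if your RAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```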
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nous-1-8B-GGUF/resolve/main/Nous-1-8B.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-1-8B-GGUF/resolve/main/Nous-1-8B.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-1-8B-GGUF/resolve/main/Nous-1-8B.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-1-8B-GGUF/resolve/main/Nous-1-8B.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-1-8B-GGUF/resolve/main/Nous-1-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-1-8B-GGUF/resolve/main/Nous-1-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nous-1-8B-GGUF/resolve/main/Nous-1-8B.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nous-1-8B-GGUF/resolve/main/Nous-1-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-1-8B-GGUF/resolve/main/Nous-1-8B.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-1-8B-GGUF/resolve/main/Nous-1-8B.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-1-8B-GGUF/resolve/main/Nous-1-8B.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-1-8B-GGUF/resolve/main/Nous-1-8B.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
loyal-misc/myst
|
loyal-misc
| 2025-08-20T14:36:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:LyliaEngine/Pony_Diffusion_V6_XL",
"base_model:adapter:LyliaEngine/Pony_Diffusion_V6_XL",
"license:unlicense",
"region:us"
] |
text-to-image
| 2025-08-20T12:10:35Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/myst.png
text: '-'
base_model: LyliaEngine/Pony_Diffusion_V6_XL
instance_prompt: myst, scalie, female
license: unlicense
---
# myst
<Gallery />
## Trigger words
You should use `myst`, `scalie`, and `female` together to trigger the image generation (they match the instance prompt `myst, scalie, female`).
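Below is a minimal diffusers sketch for trying the LoRA. It assumes the Pony Diffusion V6 XL base loads as a standard SDXL pipeline and that the adapter in this repo can be picked up by repo id; if the repo holds several weight files, pass `weight_name` explicitly.
```python
# Minimal sketch, assuming an SDXL-format base checkpoint; settings are illustrative.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "LyliaEngine/Pony_Diffusion_V6_XL", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("loyal-misc/myst")  # pass weight_name=... if needed

image = pipe("myst, scalie, female", num_inference_steps=30).images[0]
image.save("myst.png")
```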
## Download model
[Download](/loyal-misc/myst/tree/main) them in the Files & versions tab.
|
Player1444/MRF_hifigan_pretrain
|
Player1444
| 2025-08-20T14:36:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-07T17:06:51Z |
Source: https://huggingface.co/MUSTAR/temp_checkpoints/tree/main/Pretrained(rvc)/latest_Mrf_hihigan44.1_(hr)
|
Henit007/Vivekanandao1_finetuned
|
Henit007
| 2025-08-20T14:36:03Z | 182 | 0 | null |
[
"tensorboard",
"llama",
"region:us"
] | null | 2025-08-08T12:28:28Z |
# 🧠 Fine-tuned LLaMA Model using QLoRA & LoRA (Supervised Fine-Tuning)
This model is a fine-tuned version of the `model_name` base model using **QLoRA (Quantized Low-Rank Adaptation)** for efficient and memory-friendly training. Fine-tuning was performed using the Hugging Face `trl` library's `SFTTrainer` and `peft` (LoRA).
---
## 📌 Model Overview
- **Base Model**: `model_name`
- **Fine-tuning Method**: QLoRA + LoRA (PEFT)
- **Task**: Causal Language Modeling
- **Quantization**: 4-bit (bitsandbytes)
- **Frameworks**: Transformers, PEFT, TRL
---
## 🔧 Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Henit007/Vivekanandao1_finetuned")
model = AutoModelForCausalLM.from_pretrained("Henit007/Vivekanandao1_finetuned", device_map="auto")
input_text = "Explain climate change in simple terms."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
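Since the card lists 4-bit (bitsandbytes) quantization, you can also load the model the way it was trained. The sketch below uses common QLoRA defaults; the exact quantization settings are assumptions, not values confirmed by this card.
```python
# Hedged sketch: 4-bit loading with bitsandbytes; nf4 + bfloat16 compute are typical
# QLoRA defaults, not settings documented for this checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("Henit007/Vivekanandao1_finetuned")
model = AutoModelForCausalLM.from_pretrained(
    "Henit007/Vivekanandao1_finetuned",
    quantization_config=bnb_config,
    device_map="auto",
)
```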
|
joanna302/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_8e-05
|
joanna302
| 2025-08-20T14:36:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T09:22:02Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_8e-05
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_8e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_8e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_8e-05/runs/daig9xq6)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
joanna302/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_2e-05
|
joanna302
| 2025-08-20T14:35:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T08:55:27Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_2e-05
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_2e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_2e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_2e-05/runs/1czcqw7v)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755700436
|
yaelahnal
| 2025-08-20T14:35:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:34:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
joanna302/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002
|
joanna302
| 2025-08-20T14:34:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T09:24:38Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002/runs/l27wsth5)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
roeker/blockassist-bc-quick_wiry_owl_1755700391
|
roeker
| 2025-08-20T14:33:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:33:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tusharmagar/FLUX.1-Krea-dev-LoRA-Solarpunk
|
tusharmagar
| 2025-08-20T14:33:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"en",
"base_model:black-forest-labs/FLUX.1-Krea-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Krea-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2025-08-20T12:09:07Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
widget:
- text: >-
Solarpunk London with hexagonal solar panels and white architecture while
keeping traditional Parisian architecture with greenery and flowers and
fruiting trees and the Big Ben (unchanged) and a double decker bus on the
road [SLRPNK]
output:
url: images/London_Solarpunk.jpg
- text: >-
Aerial view of Solarpunk San Francisco with futuristic townhouses
architecture and solar sails while keeping the Golden Gate Bridge
(unchanged) a futuristic Sutro tower, flowers, and fruiting trees flowing
through hilly neighbourhoods, with a road cable car gliding along the
streets [SLRPNK]
output:
url: images/SanFrancisco_Solarpunk.jpg
- text: >-
Solarpunk Masai Mara tribe with solar panel dome greenhouses and separate
white mud houses, with flowers and fruiting trees, masai people, with a few
giraffes and elephants [SLRPNK]
output:
url: images/MasaiMara_Solarpunk.jpg
- text: >-
Solarpunk Rio de Janeiro with tropical solar sails shaped like leaves lining
the beaches, while keeping Christ the Redeemer (unchanged), flowers and
fruiting trees cascading through favelas, and futuristic white towers rising
along Copacabana [SLRPNK]
output:
url: images/Rio_Solarpunk.jpg
- text: >-
Solarpunk Santorini with blue-domed houses fitted with crystal roofs, while
keeping the traditional cliffside churches (unchanged), grapevines and
fruiting olive trees cascading across terraces, and massive futuristic on
water wind energy sails [SLRPNK]
output:
url: images/Santorini_Solarpunk.jpg
- text: >-
Solarpunk Varanasi with floating solar lotus platforms spread across the
Ganges River, while keeping the ghats and ancient temples (unchanged),
greenery, flowers, and fruiting trees cascading down the steps, with
bioluminescent lamps powered by algae lining the riverbanks, and futuristic
white riverboats gliding silently past ceremonies on the water [SLRPNK]
output:
url: images/Varanasi_Solarpunk.jpg
base_model: black-forest-labs/FLUX.1-Krea-dev
instance_prompt: '[SLRPNK]'
license: mit
pipeline_tag: text-to-image
language:
- en
---
# flux1 krea dev lora solarpunk
<Gallery />
## Model description
This repository contains the LoRA adapter for FLUX.1-Krea [dev], fine-tuned using https://fal.ai/models/fal-ai/flux-krea-trainer
with curated Solarpunk-style images.
This LoRA excels at creating solarpunk reimaginings of real-world cities in a dreamy style! I personally feel it performs better than Midjourney and any other text-to-image model.
The dataset was assembled for the Solarpunk Art Contest 2025 by Yishan, featuring a wide range of environments, architecture, and character scenes inspired by solarpunk aesthetics.
### Prompt Template
You should use the following template (defined when annotating the images with captions) to trigger solarpunk image generation:
"Solarpunk [city or setting] with [distinctive future-tech feature], [architecture or landmark (unchanged if historic)], [greenery and fruiting trees/flowers], [people or activity], [lighting or atmosphere], [additional details]"
## Trigger words
You should use `[SLRPNK]` to trigger the image generation.
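A minimal diffusers sketch for local inference is below. The pipeline class and settings are standard for FLUX.1 [dev]-family models but are assumptions here, not taken from this card; the LoRA itself was trained and is served via fal.ai.
```python
# Minimal sketch using diffusers' FluxPipeline; steps and guidance are illustrative.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("tusharmagar/FLUX.1-Krea-dev-LoRA-Solarpunk")

prompt = (
    "Solarpunk London with hexagonal solar panels and white architecture, "
    "greenery and fruiting trees, Big Ben (unchanged) [SLRPNK]"
)
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("solarpunk_london.png")
```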
## Download model
Weights for this model are available in Safetensors format.
[Download](/tusharmagar/flux1-krea-dev-lora-solarpunk/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-krea-trainer](https://fal.ai/models/fal-ai/flux-krea-trainer).
|
joanna302/Qwen3-8B-Base_pag_alpaca_1_part_SFT_2e-05
|
joanna302
| 2025-08-20T14:32:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T11:53:43Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_1_part_SFT_2e-05
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_1_part_SFT_2e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_1_part_SFT_2e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_1_part_SFT_2e-05/runs/pjfkh85c)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755698744
|
calegpedia
| 2025-08-20T14:31:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:31:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alexrzem/flux-loras
|
alexrzem
| 2025-08-20T14:30:21Z | 598 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2025-08-09T00:39:22Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/0a8ae8b1-44ef-4272-a7af-f24596444204.webp
text: '-'
- output:
url: images/00a440b5-3663-4706-8c2d-33f738ef2f81.webp
text: '-'
- output:
url: images/0c753105-61c5-4edf-947c-957235363058.webp
text: '-'
- output:
url: images/0ce88242-99b2-45a9-b863-41ed1f5c0480.webp
text: '-'
- output:
url: images/0d63b2d6-a27d-47b7-b052-82c1d9f327f0.webp
text: '-'
- output:
url: images/0da1fcaf-0c6b-4776-90f8-c1a1191dece0.webp
text: '-'
- output:
url: images/1a5de162-9783-40b7-a5e9-e8f94c5feaca.webp
text: '-'
- output:
url: images/1a75f111-4297-48d6-aa9f-90d4b10c8211.webp
text: '-'
- output:
url: images/1b74079e-13eb-49e5-9e3b-7c74a50b47ac.webp
text: '-'
- output:
url: images/1c58fc65-b0a3-4137-9b26-32d03a917a5d.webp
text: '-'
- output:
url: images/1ec96aa2-7787-4f01-8ae1-082881eac464.webp
text: '-'
- output:
url: images/2fe67fce-dbf9-472e-be49-d7e38c45b7db.webp
text: '-'
- output:
url: images/03a45a3b-464b-488f-8868-0c556013b00d.webp
text: '-'
- output:
url: images/3bb050f4-9090-4fe6-be0e-0abab0802d8f.webp
text: '-'
- output:
url: images/3caa3f1d-9179-4d42-865a-2a7e1645d4b5.webp
text: '-'
- output:
url: images/3dd93117-2916-4785-a074-2c585c153f48.webp
text: '-'
- output:
url: images/3e0515d5-6237-4f26-9610-76af80e37967.webp
text: '-'
- output:
url: images/3ecff79a-0a88-433d-9d7c-30b3bb02bffb.webp
text: '-'
- output:
url: images/4b643e1c-ffa3-4386-a158-ef4f43002d16.webp
text: '-'
- output:
url: images/4d0999ca-4236-4f22-b905-f897273b96dd.webp
text: '-'
- output:
url: images/4e8deff4-3c50-40d8-be51-778c61fa5540.webp
text: '-'
- output:
url: images/4e526fa9-c8fc-4206-b4b1-45bb3cb637a1.webp
text: '-'
- output:
url: images/5e0f7179-b0f6-4367-96e3-f9f3c9a87ff2.webp
text: '-'
- output:
url: images/5e5ff0c0-f90c-413f-a138-3635fcf530c5.webp
text: '-'
- output:
url: images/5e94cd8e-82fb-4982-9a65-627c441d0924.webp
text: '-'
- output:
url: images/6d7dd9c2-7c8c-448b-88ba-07694ca3f4dc.webp
text: '-'
- output:
url: images/6d782c54-9b9e-43f4-b5e5-3ea653d77a6d.webp
text: '-'
- output:
url: images/6fea452c-a84a-42d7-bcef-81b195875685.webp
text: '-'
- output:
url: images/7a63adeb-37fa-4356-bf08-2a6db1831116.webp
text: '-'
- output:
url: images/7aaaa16a-bf3a-474a-8e59-50432a8ffb45.webp
text: '-'
- output:
url: images/7c94955b-220f-48b8-9b05-4f6c2ccc9d60.webp
text: '-'
- output:
url: images/7d81f5c4-31ba-427d-a7c5-787dff327186.webp
text: '-'
- output:
url: images/7f2b8270-0f46-4e6e-856d-80321c9bfc13.webp
text: '-'
- output:
url: images/7f296431-81af-4660-b26b-2e8fb1a207fa.webp
text: '-'
- output:
url: images/7fec5fcb-79c2-4ae1-8ba6-078ada9fd718.webp
text: '-'
- output:
url: images/8a63fe63-e096-4beb-b3e0-1fa3e8bbc260.webp
text: '-'
- output:
url: images/8af689f6-fb7f-4725-8e2c-beed918b291f.webp
text: '-'
- output:
url: images/08c58aa8-3769-48ee-94a9-e3e3535ed8ff.webp
text: '-'
- output:
url: images/8c9387d9-adbe-483b-af74-9b9aa3ba8a78.webp
text: '-'
- output:
url: images/8d649609-d2ed-4ed9-860c-04cfddfb7c74.webp
text: '-'
- output:
url: images/9c2ba905-0ef3-4464-96ae-7d830cdf4730.webp
text: '-'
- output:
url: images/9ce45695-3c36-45eb-a13f-3b70fb51b8d1.png
text: '-'
- output:
url: images/12dd781c-c3b8-4339-a4cb-96e7aceeddec.webp
text: '-'
- output:
url: images/22e7de6b-ceee-4778-a6dc-e7bf62bae80b.webp
text: '-'
- output:
url: images/24f25011-d438-4d0a-970d-3ba344dbe29e.webp
text: '-'
- output:
url: images/27cb4ea7-4dde-4d5f-837e-8495dd4b237a.webp
text: '-'
- output:
url: images/028e7509-f726-4e1c-85b3-74e70bd8b663.webp
text: '-'
- output:
url: images/30d44441-b742-4193-8a27-70369dfbd81c.webp
text: '-'
- output:
url: images/34af78eb-5301-4fcc-a8b5-be3a6ef82257.webp
text: '-'
- output:
url: images/37d373d5-e6e7-4ad8-89ca-2583cfe449cb.webp
text: '-'
- output:
url: images/52e20b89-2767-45da-8eb7-1acd2f3545c9.webp
text: '-'
- output:
url: images/52f28514-0826-43d1-9844-15fbc77dbc1d.webp
text: '-'
- output:
url: images/53e68567-4804-4ac7-bfe7-01d792537972.webp
text: '-'
- output:
url: images/57ba0475-2926-45d5-9c44-fd13b6843e6c.webp
text: '-'
- output:
url: images/58ec0783-d965-4ee3-963b-3d67d7d14d57.webp
text: '-'
- output:
url: images/60df9d05-fa90-4964-b9f1-ad3b2f1611ff.webp
text: '-'
- output:
url: images/61bf20f1-798a-4422-961e-90af04e2cc6e.webp
text: '-'
- output:
url: images/64ebb272-30d8-403d-925c-05f60f9701e8.webp
text: '-'
- output:
url: images/66bca8c7-c3e5-488f-a00e-f621f492f747.webp
text: '-'
- output:
url: images/0069e503-57da-49f9-86cd-939a3e699989.webp
text: '-'
- output:
url: images/70cccff3-0dcb-45ef-881b-e1e5918734e5.webp
text: '-'
- output:
url: images/71d3e143-bd49-4878-ae3b-e9fed5932fac.webp
text: '-'
- output:
url: images/73d8220c-8f8e-4f80-8fc5-a32f3b982ba7.webp
text: '-'
- output:
url: images/75df41ef-8d04-4e9e-882b-55829693fc3e.webp
text: '-'
- output:
url: images/77eabc85-c275-456a-9b37-05b8614b716e.webp
text: '-'
- output:
url: images/80f0ddb0-e2e4-4cdc-b81a-9790a9675948.webp
text: '-'
- output:
url: images/081b828e-bb13-4960-ab11-c70534cb04f4.webp
text: '-'
- output:
url: images/81ff570a-4e37-4d71-9197-6ed5ca658956.webp
text: '-'
- output:
url: images/94d1a2b1-0006-4d4d-b057-ccbe3d362adf.webp
text: '-'
- output:
url: images/99d30d81-0ed4-42c1-9f0e-6414e67d0121.webp
text: '-'
- output:
url: images/99f4f075-bd45-4ad0-bf0f-a4ebffc1cbc4.webp
text: '-'
- output:
url: images/121c7745-542a-4816-a8d2-58223610df8d.webp
text: '-'
- output:
url: images/126b07d6-d947-4552-8f04-5bf2d55b5a6a.webp
text: '-'
- output:
url: images/151e2b04-19d0-4a5d-b349-51488ded013e.webp
text: '-'
- output:
url: images/165af86b-da62-4d75-9436-0d3b12b9aaf3.webp
text: '-'
- output:
url: images/203b9a15-0adb-434f-9716-a8b9d57889bc.webp
text: '-'
- output:
url: images/213df832-29f5-4d73-9852-5bea8961fbbb.webp
text: '-'
- output:
url: images/223dee16-c1fd-4c0e-a803-a806b5d68bca.webp
text: '-'
- output:
url: images/304ad79d-6d47-4501-8622-ea77605d052c.webp
text: '-'
- output:
url: images/311c734f-6ba6-412e-9bfb-c08bd754f759.webp
text: '-'
- output:
url: images/350f0761-40e3-425c-a67e-9dd12b7aaa87.webp
text: '-'
- output:
url: images/422c54c5-a827-4b61-b86f-b53f26fef74d.webp
text: '-'
- output:
url: images/485d7181-e2f6-4bfd-b557-f8a80980d012.webp
text: '-'
- output:
url: images/488b9057-be9d-4ded-9da6-7fb1fb71c01d.webp
text: '-'
- output:
url: images/600a7a80-1b6b-4f5d-b83f-d7e134d48561.webp
text: '-'
- output:
url: images/622dd267-c69d-45bc-8351-3643265df64d.webp
text: '-'
- output:
url: images/641ff66e-6f9d-48f0-9418-f2b937ded86c.webp
text: '-'
- output:
url: images/655a3845-ed24-48c9-a77d-e71dfc5394ac.webp
text: '-'
- output:
url: images/748cddda-e1d5-45fb-8941-70ac5e4ac21a.webp
text: '-'
- output:
url: images/780d151e-4ae1-41d9-936f-804d2c8ccbaf.webp
text: '-'
- output:
url: images/786f7a4d-2411-4f8b-946d-1bbe35494400.webp
text: '-'
- output:
url: images/823a4519-9079-4f79-bc95-57ce43e5b6df.webp
text: '-'
- output:
url: images/846fc232-9bc7-4f34-9bab-60be5caeafe8.webp
text: '-'
- output:
url: images/935dca03-f080-44cf-a252-cd77c3b16961.webp
text: '-'
- output:
url: images/2050c213-608a-44ed-80bd-1b8ec4384882.webp
text: '-'
- output:
url: images/3043db21-fc16-4c51-b2b7-b0c24b1151b2.webp
text: '-'
- output:
url: images/3486f005-0202-4f79-b948-a57d9e557166.webp
text: '-'
- output:
url: images/3775bf08-ba06-41a3-bff2-cca408591106.webp
text: '-'
- output:
url: images/3857aef6-1935-4f55-afec-d652fcbfded8.webp
text: '-'
- output:
url: images/3911fc09-26b2-45e8-a5ba-110b6b6cc6a8.webp
text: '-'
- output:
url: images/5188ac1c-195f-41f7-a9d1-aaf53d7a2033.webp
text: '-'
- output:
url: images/6128a914-b292-40d4-97c9-2586cb637a1c.webp
text: '-'
- output:
url: images/6178ae17-119a-403a-bab0-f236e67025d9.webp
text: '-'
- output:
url: images/6820cba8-93e8-491b-8afa-d94d2072cd7e.webp
text: '-'
- output:
url: images/7412f1ed-8dce-48af-af40-b3993a5c1cba.webp
text: '-'
- output:
url: images/9550a825-c5e2-40ff-ae1f-86b113be53dc.webp
text: '-'
- output:
url: images/43810b80-442a-45a4-a6e2-aa08795169c9.webp
text: '-'
- output:
url: images/48785a69-8796-4d2d-aaef-c2cb0c35ae92.webp
text: '-'
- output:
url: images/64517d92-b531-4649-91e3-87941073dab1.webp
text: '-'
- output:
url: images/82164d7e-42e4-493f-9b39-269077f2df35.webp
text: '-'
- output:
url: images/87189a64-b9d1-4e07-895c-e36db01283bf.webp
text: '-'
- output:
url: images/91602b93-38b4-4dd0-878f-23250b55f9cc.webp
text: '-'
- output:
url: images/772416d4-10ae-4ea6-b633-dd9f074894b6.webp
text: '-'
- output:
url: images/2491729d-37ff-4820-bafe-f98f4f49d53e.webp
text: '-'
- output:
url: images/5638741c-9879-4924-99e6-daeef82c6dcf.webp
text: '-'
- output:
url: images/5727238c-f88e-44f3-b7f3-b7cdf6f4bf2b.webp
text: '-'
- output:
url: images/6498163e-4722-4d39-8d61-e86d114bfc6f.webp
text: '-'
- output:
url: images/8067004e-34c6-48cd-b761-fc5592f82237.webp
text: '-'
- output:
url: images/19626983-4f78-4d28-900b-63a848fa8832.webp
text: '-'
- output:
url: images/21345420-ead1-4f4a-a951-9270e2c64c1b.webp
text: '-'
- output:
url: images/56968188-2cdd-47b7-8ae9-04b01fa4722f.webp
text: '-'
- output:
url: images/61898152-8223-441b-9068-4c11e18de347.webp
text: '-'
- output:
url: images/62818151-59ff-4aa9-87ab-c54f80d3fa67.webp
text: '-'
- output:
url: images/91636543-f156-4f5f-9d5c-0f62723451d0.webp
text: '-'
- output:
url: images/a4a80220-bd35-4c7c-9b54-e95a9777b512.webp
text: '-'
- output:
url: images/a4e8d0d9-16c5-4f6f-84f6-e4060a6192de.webp
text: '-'
- output:
url: images/a55f22fc-8be4-4603-a62f-74b6ccfbf40a.webp
text: '-'
- output:
url: images/a244f2be-a977-4172-ad8d-c2b46ebc26af.webp
text: '-'
- output:
url: images/a46169ef-b95d-4eda-92fb-bcb6e4a516d9.webp
text: '-'
- output:
url: images/aa209f48-47dc-49b5-a126-b5daf1cf7a59.webp
text: '-'
- output:
url: images/ab506a5d-b96e-4a99-bef3-95c8ee7fbdbc.webp
text: '-'
- output:
url: images/ac7aaf25-ca0d-4e93-99a5-7750076076c0.webp
text: '-'
- output:
url: images/aec5e716-ca6b-4f71-b87d-e5e5fb0ea9ec.webp
text: '-'
- output:
url: images/b1dc3d49-91ef-4392-8504-f6a6946b4283.webp
text: '-'
- output:
url: images/b5a3421b-a029-4a97-b126-7e8066f55c66.webp
text: '-'
- output:
url: images/b5ea2bf6-3129-4341-8d52-7e7ed9e00c0e.webp
text: '-'
- output:
url: images/b7d0581a-dcb6-4c87-833d-a7b73305e514.webp
text: '-'
- output:
url: images/b82f8efc-6030-4987-be08-06373ac36c38.webp
text: '-'
- output:
url: images/b85d8783-beb1-400b-97fb-3cb2acfde38f.webp
text: '-'
- output:
url: images/b472fc5d-3bac-40ee-b279-f79bda89893d.webp
text: '-'
- output:
url: images/ba2af392-c129-46a8-8e07-5d670f764b3f.webp
text: '-'
- output:
url: images/bce1b94b-b10b-4fcb-8ab1-8516d32516c2.webp
text: '-'
- output:
url: images/bd7c8c57-8835-4610-8ce9-ec936bba1355.webp
text: '-'
- output:
url: images/be3f60b8-ddce-4d12-b85f-f621b04557f1.webp
text: '-'
- output:
url: images/bf2271c0-7a20-411a-a052-b8c80c14eb95.webp
text: '-'
- output:
url: images/c2a0c0b3-3fbb-4b49-b66f-d55eb3e4b45c.webp
text: '-'
- output:
url: images/c2d7be96-bb47-4666-b591-671c9c66e5f2.webp
text: '-'
- output:
url: images/c97b9f4b-2548-48c2-9b0b-22455b01886a.webp
text: '-'
- output:
url: images/c103b8e2-96a1-4415-9c05-640bd31075c4.webp
text: '-'
- output:
url: images/c438b553-4a2c-4b49-a938-83d6a47c051f.webp
text: '-'
- output:
url: images/c7589b06-2e84-4be1-9b61-5390ee41658c.webp
text: '-'
- output:
url: images/c8387a40-d29c-4b49-904d-a57da47c57e3.webp
text: '-'
- output:
url: images/c8989af4-6f40-4f27-a767-094d04d9847c.webp
text: '-'
- output:
url: images/ccbc75af-1bcb-4785-93f5-48dbf035de40.webp
text: '-'
- output:
url: images/cd35b610-2173-4c90-99da-0894042650c5.webp
text: '-'
- output:
url: images/d3d4a096-8e55-44a0-8219-9bc4611c7d9c.webp
text: '-'
- output:
url: images/d5c6f3cf-e891-4622-8ae2-c81bc55b5fa5.webp
text: '-'
- output:
url: images/d8dcc5e7-0202-417a-9379-2e80f646091f.webp
text: '-'
- output:
url: images/d9b91cbe-b5fe-443a-b6e8-60ab38588253.webp
text: '-'
- output:
url: images/d17a3fea-685f-4b2b-a747-87481598e1b8.webp
text: '-'
- output:
url: images/d88babe9-c7ce-41e9-b59a-efcc56e6693b.webp
text: '-'
- output:
url: images/d39432f2-6c31-4a65-abdb-7867d5953d8f.webp
text: '-'
- output:
url: images/dcee51b5-0c6b-4a88-9673-6b1ac9550408.webp
text: '-'
- output:
url: images/dd8e0979-4cd1-4545-8a1b-b51f247e56be.webp
text: '-'
- output:
url: images/dd67a021-1c50-45e4-ac78-48cd53c44e79.webp
text: '-'
- output:
url: images/dd955d85-1bfd-4369-9e21-24d014dfded8.webp
text: '-'
- output:
url: images/dd7281c5-1901-41a2-9665-d56162a883d1.webp
text: '-'
- output:
url: images/e0ad4f8b-d747-4292-adb5-8567ff6637e0.webp
text: '-'
- output:
url: images/e05d7af5-dcf3-4433-9333-67cfd924799b.webp
text: '-'
- output:
url: images/e8f9dc7c-44cb-46f7-a5f2-2a644b8e6656.webp
text: '-'
- output:
url: images/e9823cdc-a212-4e39-8718-fdfd87174a90.webp
text: '-'
- output:
url: images/ea321a7c-b875-4c6c-9f66-59b12dd325c3.webp
text: '-'
- output:
url: images/eb5f1902-a562-440c-afcc-16e644a40473.webp
text: '-'
- output:
url: images/eba94178-3446-4403-bf5b-4818897eeac0.webp
text: '-'
- output:
url: images/ebbd1d12-b70b-44c0-b2e3-c9a889b1b6ba.webp
text: '-'
- output:
url: images/ec1ba03b-2904-4016-9df9-9c33382447ff.webp
text: '-'
- output:
url: images/ec2ff0ee-34cb-4d01-b7f7-0b4116658a6b.webp
text: '-'
- output:
url: images/ec28efeb-cfea-4b20-bd38-9c0dbe44f714.webp
text: '-'
- output:
url: images/ed3a38ce-6843-44f2-bf99-7229c1068bfc.webp
text: '-'
- output:
url: images/ee3c020e-38e0-4126-a1d1-4f76f6596938.webp
text: '-'
- output:
url: images/eea326a8-9138-4f40-aa6c-b14f130b2710.webp
text: '-'
- output:
url: images/efa5479a-d965-4dc7-9b60-1064256fccb3.webp
text: '-'
- output:
url: images/f6c3b6cf-0a1a-4220-a7e7-54beed1a19c5.webp
text: '-'
- output:
url: images/f6c42a04-9573-473c-b732-63f095ee0732.webp
text: '-'
- output:
url: images/f0262576-52f2-44a1-b188-0ce3630c49f6.webp
text: '-'
- output:
url: images/f9737306-84a7-44a8-b721-214d89904605.webp
text: '-'
- output:
url: images/fac59c5b-9d61-47b8-82f5-3b4b6e53078f.webp
text: '-'
- output:
url: images/fbb33a7d-8aa2-4065-9a1e-b6d6b5a0a49e.webp
text: '-'
- output:
url: images/fd259e55-ece8-410b-8b45-df3f03875ecf.webp
text: '-'
- output:
url: images/fd9913cd-6426-4fcc-a250-2dfd69e8cfa4.webp
text: '-'
- output:
url: images/234878b7-c0ea-427b-9e95-17290e7bed7d.webp
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: mit
---
# flux-loras
<Gallery />
## Model description
Personal Use FLUX.1 [dev] LoRAs
## Download model
[Download](/alexrzem/flux-loras/tree/main) them in the Files & versions tab.
|
youuotty/blockassist-bc-furry_reptilian_flamingo_1755700198
|
youuotty
| 2025-08-20T14:30:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"furry reptilian flamingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:29:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry reptilian flamingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
phospho-app/Stormholman-gr00t-bluetoothbox-h2rkx
|
phospho-app
| 2025-08-20T14:29:27Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"gr00t_n1_5",
"gr00t",
"robotics",
"dataset:Stormholman/bluetoothbox",
"region:us"
] |
robotics
| 2025-08-20T14:07:03Z |
---
datasets: Stormholman/bluetoothbox
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [Stormholman/bluetoothbox](https://huggingface.co/datasets/Stormholman/bluetoothbox)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
sai9390/age_predictor2
|
sai9390
| 2025-08-20T14:29:11Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T14:29:11Z |
---
license: apache-2.0
---
|
pobiiiiiii/blockassist-bc-ravenous_yapping_ferret_1755700099
|
pobiiiiiii
| 2025-08-20T14:29:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ravenous yapping ferret",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:28:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ravenous yapping ferret
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sdagsadgd/blockassist-bc-sedate_squeaky_salamander_1755696899
|
sdagsadgd
| 2025-08-20T14:29:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sedate squeaky salamander",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:28:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sedate squeaky salamander
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
razor534/blockassist-bc-lazy_extinct_termite_1755700056
|
razor534
| 2025-08-20T14:28:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lazy extinct termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:28:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lazy extinct termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755698495
|
koloni
| 2025-08-20T14:28:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:28:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hawierdev/shellm-v0.1
|
hawierdev
| 2025-08-20T14:27:16Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-20T08:26:54Z |
# ShellM v0.1
ShellM is a specialized language model based on Qwen2.5 1.5B, fine-tuned for generating intelligent terminal command suggestions. The model has been trained to understand terminal context and suggest appropriate commands.
## Model Specifications
- **Architecture**: Qwen2ForCausalLM
- **Size**: 1.5B parameters
- **Hidden size**: 1536
- **Layers**: 28
- **Attention heads**: 12 (2 key-value heads)
- **Max context length**: 32768 tokens
- **Precision**: bfloat16
## Capabilities
- Terminal command suggestions based on context
- Understanding complex system tasks
- Shell script generation
- Command debugging assistance
- Multi-platform support
## Usage
The model uses the ChatML format for conversations:
```
<|im_start|>user
How to display all processes using port 8080?
<|im_end|>
<|im_start|>assistant
lsof -i :8080
<|im_end|>
```
### Example Use Cases
**File Management:**
```
User: How to find all Python files larger than 1MB?
Assistant: find . -name "*.py" -size +1M
```
**Process Management:**
```
User: How to kill all python processes?
Assistant: pkill python
```
**Git Operations:**
```
User: How to undo last commit without losing changes?
Assistant: git reset --soft HEAD~1
```
## Installation and Usage
Requirements:
- transformers
- torch
- tokenizers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model from the Hugging Face Hub
model_name = "hawierdev/shellm-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage with the ChatML prompt format
prompt = "<|im_start|>user\nHow to check disk usage?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt")
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=150, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=False)
print(response)
```
## Special Tokens
The model supports standard Qwen2 tokens:
- `<|im_start|>`, `<|im_end|>` - conversation markers
- `<|vision_pad|>` - padding token
- Fill-in-the-middle tokens: `<|fim_prefix|>`, `<|fim_middle|>`, `<|fim_suffix|>`
## Version Info
Version: v0.1
Based on: Qwen2.5-1.5B
Fine-tuned with: Unsloth v2025.8.8
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755698286
|
katanyasekolah
| 2025-08-20T14:27:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:27:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Voxtral-Small-24B-2507-i1-GGUF
|
mradermacher
| 2025-08-20T14:26:57Z | 474 | 0 |
transformers
|
[
"transformers",
"gguf",
"vllm",
"en",
"fr",
"de",
"es",
"it",
"pt",
"nl",
"hi",
"base_model:mistralai/Voxtral-Small-24B-2507",
"base_model:quantized:mistralai/Voxtral-Small-24B-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-07-29T12:07:42Z |
---
base_model: mistralai/Voxtral-Small-24B-2507
language:
- en
- fr
- de
- es
- it
- pt
- nl
- hi
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- vllm
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/mistralai/Voxtral-Small-24B-2507
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Voxtral-Small-24B-2507-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-GGUF
**This is a multimodal (audio) model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Small-24B-2507-i1-GGUF/resolve/main/Voxtral-Small-24B-2507.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF
|
mradermacher
| 2025-08-20T14:25:01Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:KaraKaraWitch/BiggerCoQ-Qwen3-10b",
"base_model:quantized:KaraKaraWitch/BiggerCoQ-Qwen3-10b",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-20T07:51:26Z |
---
base_model: KaraKaraWitch/BiggerCoQ-Qwen3-10b
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/KaraKaraWitch/BiggerCoQ-Qwen3-10b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#BiggerCoQ-Qwen3-10b-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-IQ1_M.gguf) | i1-IQ1_M | 3.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-IQ2_S.gguf) | i1-IQ2_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-IQ2_M.gguf) | i1-IQ2_M | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-Q2_K.gguf) | i1-Q2_K | 4.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-IQ3_S.gguf) | i1-IQ3_S | 5.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-IQ3_M.gguf) | i1-IQ3_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-Q4_0.gguf) | i1-Q4_0 | 6.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 6.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-Q4_1.gguf) | i1-Q4_1 | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.i1-Q6_K.gguf) | i1-Q6_K | 9.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Behzadshomali/16_08_20
|
Behzadshomali
| 2025-08-20T14:23:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Behzadshomali/Teuken3.7B",
"base_model:finetune:Behzadshomali/Teuken3.7B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T14:08:55Z |
---
base_model: Behzadshomali/Teuken3.7B
library_name: transformers
model_name: '16_08_20'
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 16_08_20
This model is a fine-tuned version of [Behzadshomali/Teuken3.7B](https://huggingface.co/Behzadshomali/Teuken3.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Behzadshomali/16_08_20", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/behzadshomali/Teuken3.73T_IT_grade-school-math/runs/i9amv9ig)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
aivoryinnovations/jay
|
aivoryinnovations
| 2025-08-20T14:21:41Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-20T13:23:08Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
finneganrainier/vit-detector
|
finneganrainier
| 2025-08-20T14:21:27Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T14:15:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kelasbgd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vocal_scurrying_tarantula
|
kelasbgd
| 2025-08-20T14:20:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am vocal_scurrying_tarantula",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T13:03:00Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vocal_scurrying_tarantula
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aidan-ucc/LoRA-qwen2.5VL-3B-5200
|
aidan-ucc
| 2025-08-20T14:20:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-20T14:17:00Z |
---
base_model: unsloth/Qwen2.5-VL-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** aidan-ucc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-3B-Instruct
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755697707
|
manusiaperahu2012
| 2025-08-20T14:18:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:18:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755697900
|
helmutsukocok
| 2025-08-20T14:18:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:18:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1755697755
|
chainway9
| 2025-08-20T14:17:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:17:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755698343
|
Sayemahsjn
| 2025-08-20T14:17:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:16:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qing223101/blockassist-bc-bellowing_shrewd_tiger_1755697862
|
qing223101
| 2025-08-20T14:16:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing shrewd tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:16:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing shrewd tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sm00thix/unet
|
Sm00thix
| 2025-08-20T14:16:26Z | 0 | 1 |
pytorch
|
[
"pytorch",
"computer-vision",
"image-segmentation",
"unet",
"u-net",
"medical-imaging",
"semantic-segmentation",
"arxiv:2504.14131",
"arxiv:1505.04597",
"arxiv:1502.03167",
"arxiv:1607.06450",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2025-08-20T12:12:07Z |
---
license: apache-2.0
tags:
- computer-vision
- image-segmentation
- pytorch
- unet
- u-net
- medical-imaging
- semantic-segmentation
library_name: pytorch
pipeline_tag: image-segmentation
---
# U-Net
This repository contains an implementation of U-Net [[1]](#references). [unet.py](./unet.py) implements the class UNet. The implementation has been tested with PyTorch 2.7.1 and CUDA 12.6.

You can load the U-Net from PyTorch Hub.
```python
import torch
# These are the default parameters. They are written out for clarity. Currently no pretrained weights are available.
model = torch.hub.load('sm00thix/unet', 'unet', pretrained=False, in_channels=3, out_channels=1, pad=True, bilinear=True, normalization=None)
# or
# model = torch.hub.load('sm00thix/unet', 'unet_bn', **kwargs) # Convenience function equivalent to torch.hub.load('sm00thix/unet', 'unet', normalization='bn', **kwargs)
# or
# model = torch.hub.load('sm00thix/unet', 'unet_ln', **kwargs) # Convenience function equivalent to torch.hub.load('sm00thix/unet', 'unet', normalization='ln', **kwargs)
# or
# model = torch.hub.load('sm00thix/unet', 'unet_medical', **kwargs) # Convenience function equivalent to torch.hub.load('sm00thix/unet', 'unet', in_channels=1, out_channels=1, **kwargs)
# or
# model = torch.hub.load('sm00thix/unet', 'unet_transconv', **kwargs) # Convenience function equivalent to torch.hub.load('sm00thix/unet', 'unet', bilinear=False, **kwargs)
```
You can also clone this repository to access the U-Net directly.
```python
import torch
from unet import UNet
model = UNet(in_channels=3, out_channels=1, pad=True, bilinear=True, normalization=None)
```
## Options
The UNet class provides the following options for customization.
1. Number of input and output channels
`in_channels` is the number of channels in the input image.
`out_channels` is the number of channels in the output image.
2. Upsampling
1. `bilinear = False`: Transposed convolution with a 2x2 kernel applied with stride 2. This is followed by a ReLU.
2. `bilinear = True`: Factor 2 bilinear upsampling followed by convolution with a 1x1 kernel applied with stride 1.
3. Padding
1. `pad = True`: The input size is retained in the output by zero-padding convolutions and, if necessary, the results of the upsampling operations.
2. `pad = False`: The output is smaller than the input, as in the original implementation. In this case, every 3x3 convolution layer reduces the height and width by 2 pixels each. Consequently, the right side of the U-Net has a smaller spatial size than the left side. Therefore, before concatenating, the central slice of the left tensor is cropped along the spatial dimensions to match those of the right tensor.
4. Normalization following the ReLU which follows each convolution and transposed convolution.
1. `normalization = None`: Applies no normalization.
2. `normalization = "bn"`: Applies batch normalization [[2]](#references).
3. `normalization = "ln"`: Applies layer normalization [[3]](#references). A permutation of dimensions is performed before the layer to ensure normalization is applied over the channel dimension. Afterward, the dimensions are permuted back to their original order.
In particular, setting `bilinear = False`, `pad = False`, and `normalization = None` yields the U-Net as originally designed. Generally, however, `bilinear = True` is recommended to avoid checkerboard artifacts.
As in the original implementation, all weights are initialized by sampling from a Kaiming He Normal Distribution [[4]](#references), and all biases are initialized to zero. If Batch Normalization or Layer Normalization is used, the weights of those layers are initialized to one and their biases to zero.
If you use this U-Net implementation, please cite Engstrøm et al. [[5]](#references), who developed this implementation as part of their work on chemical map generation of fat content in images of pork bellies.
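For instance, the two configurations discussed above can be instantiated directly (a short sketch using the `UNet` class from this repository; channel counts and input size are arbitrary):
```python
import torch
from unet import UNet

# Original design: transposed-conv upsampling, no padding, no normalization
original = UNet(in_channels=1, out_channels=2, pad=False, bilinear=False, normalization=None)

# Recommended variant: bilinear upsampling (avoids checkerboard artifacts) with padding
recommended = UNet(in_channels=3, out_channels=1, pad=True, bilinear=True, normalization="bn")

x = torch.randn(1, 3, 256, 256)
print(recommended(x).shape)  # pad=True retains the 256x256 spatial size
```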
## Citation
If you use the code shared in this repository, please cite this work: https://arxiv.org/abs/2504.14131. The U-Net implementation in this repository was used to generate pixel-wise fat predictions in an image of a pork belly.

## References
1. [O. Ronneberger, P. Fischer, and Thomas Brox (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. *MICCAI 2015*.](https://arxiv.org/abs/1505.04597)
2. [S. Ioffe and C. Szegedy (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. *ICML 2015*.](https://arxiv.org/abs/1502.03167)
3. [J. L. Ba and J. R. Kiros and G. E. Hinton (2016). Layer Normalization.](https://arxiv.org/abs/1607.06450)
4. [K. He and X. Zhang and S. Ren and J. Sun (2015). Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.](https://openaccess.thecvf.com/content_iccv_2015/html/He_Delving_Deep_into_ICCV_2015_paper.html)
5. [O.-C. G. Engstrøm and M. Albano-Gaglio and E. S. Dreier and Y. Bouzembrak and M. Font-i-Furnols and P. Mishra and K. S. Pedersen (2025). Transforming Hyperspectral Images Into Chemical Maps: A Novel End-to-End Deep Learning Approach.](https://arxiv.org/abs/2504.14131)
## Funding
This work has been carried out as part of an industrial Ph.D. project receiving funding from [FOSS Analytical A/S](https://www.fossanalytics.com/) and [The Innovation Fund Denmark](https://innovationsfonden.dk/en). Grant number 1044-00108B.
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755699346
|
lilTAT
| 2025-08-20T14:16:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:16:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755699146
|
lqpl
| 2025-08-20T14:14:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:13:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755699181
|
Vasya777
| 2025-08-20T14:14:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:13:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755697592
|
indoempatnol
| 2025-08-20T14:13:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:13:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stephenoptins/tracy_moore_2
|
stephenoptins
| 2025-08-20T14:13:11Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-20T13:35:13Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Tracy
---
# Tracy_Moore_2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Tracy` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Tracy",
"lora_weights": "https://huggingface.co/stephenoptins/tracy_moore_2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('stephenoptins/tracy_moore_2', weight_name='lora.safetensors')
image = pipeline('Tracy').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 3302
- Learning rate: 0.0004
- LoRA rank: 48
## Contribute your own examples
You can use the [community tab](https://huggingface.co/stephenoptins/tracy_moore_2/discussions) to add images that show off what you've made with this LoRA.
|
jglowa/prosty-rag
|
jglowa
| 2025-08-20T14:12:46Z | 2 | 4 | null |
[
"llamafile",
"rag",
"text-generation",
"pl",
"base_model:speakleash/Bielik-4.5B-v3.0-Instruct",
"base_model:finetune:speakleash/Bielik-4.5B-v3.0-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-15T03:48:57Z |
---
license: apache-2.0
language:
- pl
base_model:
- speakleash/Bielik-4.5B-v3.0-Instruct
pipeline_tag: text-generation
tags:
- rag
---
# Prosty RAG
Prosty RAG is an open-source AI assistant based on the Polish language model [Bielik-4.5B-v3.0-Instruct](https://huggingface.co/speakleash/Bielik-4.5B-v3.0-Instruct) that answers questions from the user's private knowledge base using RAG (Retrieval-Augmented Generation). **The assistant runs fully locally**, as two executable files for Windows/Linux/macOS built on [llamafile](https://llamafile.ai/) and embedfile technology. The application is portable, requires no Python environment with a pile of packages (e.g. LangChain, LangGraph, LlamaIndex and the like), automatically detects installed GPU libraries (CUDA/ROCm), and falls back to the CPU when none are found.
How it works:
1. Place PDF, TXT and MD (Markdown) knowledge-base files in the `baza` folder,
2. The files are indexed (PDFs are converted to TXT with [pdftotext](https://www.xpdfreader.com/pdftotext-man.html)), split into chunks, and embedded into the [sqlite-vec](https://github.com/asg017/sqlite-vec) vector database,
3. For a given query, the most relevant chunks are retrieved from the database and added to the question's context,
4. The language model generates an answer enriched with the retrieved knowledge-base data.
### Running
Just download the [**prosty-rag.cmd**](https://huggingface.co/jglowa/prosty-rag/resolve/main/prosty-rag.cmd?download=true) file (right-click -> save link as...) and run it (double-click it, or type `./prosty-rag.cmd` on the command line). The script downloads `prosty-rag.llamafile` and `bge-m3.embedfile` by itself (if they have not been downloaded before), runs the indexer (if it has not been run yet), starts the server with the embedding model (embedfile) and the language model (llamafile), and opens [http://localhost:8080](http://localhost:8080) in the web browser. The assistant works offline, and all data stays locally on the device.
Place all the PDF, TXT and MD files for the knowledge base in the `baza` folder. Then run the `indeksator.cmd` script, which converts the PDF files to TXT and indexes the text files in the `prosty-rag.db` SQLite vector database using the `bge-m3.embedfile` embedding model. Run the indexer after every change to the files in the `baza` folder.
To ask questions about the indexed knowledge base, run the `prosty-rag.cmd` script and type a question. The most relevant chunks are looked up in the `prosty-rag.db` database, then the `prosty-rag.llamafile` language model is loaded and a chat pre-filled with the user's question opens in the web browser. Just wait for the answer.
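On Linux/macOS the same steps can be scripted; a minimal sketch (assumptions: `curl` is available and the script needs the executable bit; on Windows, double-clicking `prosty-rag.cmd` is enough):
```bash
# Fetch the launcher script
curl -L -o prosty-rag.cmd https://huggingface.co/jglowa/prosty-rag/resolve/main/prosty-rag.cmd
chmod +x prosty-rag.cmd

# Put your knowledge-base files where the indexer expects them
mkdir -p baza
cp ~/documents/*.pdf baza/   # hypothetical source path

# First run downloads prosty-rag.llamafile and bge-m3.embedfile,
# indexes the `baza` folder and opens http://localhost:8080
./prosty-rag.cmd
```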
### Building
To build your own version of the AI assistant, download the files `build.cmd`, `.args` and `www/chatbot.js`, and optionally change the GGUF model in `build.cmd`. Finally, run the `build.cmd` script. After a successful build, a new `prosty-rag.llamafile` file should appear.
### Preview

|
mradermacher/Cerium-Qwen3-R1-Dev-GGUF
|
mradermacher
| 2025-08-20T14:12:38Z | 1,160 | 0 |
transformers
|
[
"transformers",
"gguf",
"gspo",
"text-generation-inference",
"code",
"math",
"trl",
"science",
"moe",
"en",
"base_model:prithivMLmods/Cerium-Qwen3-R1-Dev",
"base_model:quantized:prithivMLmods/Cerium-Qwen3-R1-Dev",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-10T10:18:19Z |
---
base_model: prithivMLmods/Cerium-Qwen3-R1-Dev
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- gspo
- text-generation-inference
- code
- math
- trl
- science
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/prithivMLmods/Cerium-Qwen3-R1-Dev
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Cerium-Qwen3-R1-Dev-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Cerium-Qwen3-R1-Dev-GGUF/resolve/main/Cerium-Qwen3-R1-Dev.f16.gguf) | f16 | 1.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
iikkmn/pythia-70m-Gensyn-Swarm-skittish_nocturnal_otter
|
iikkmn
| 2025-08-20T14:12:12Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am skittish_nocturnal_otter",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T16:43:14Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am skittish_nocturnal_otter
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ibm-granite/granite-geospatial-wxc-downscaling
|
ibm-granite
| 2025-08-20T14:11:51Z | 57 | 34 | null |
[
"pytorch",
"license:cdla-permissive-2.0",
"region:us"
] | null | 2024-09-20T18:41:49Z |
---
license: cdla-permissive-2.0
---
# Model card for granite-geospatial-wxc-downscaling
<!-- [<b><i>>>Try it on Colab<<</i></b> (Please select the T4 GPU runtime)](https://colab.research.google.com/github/IBM/granite-wxc/blob/main/examples/granitewxc_downscaling/notebooks/granitewxc_downscaling_inference.ipynb) -->
`granite-geospatial-wxc-downscaling` is a fine-tuned foundation model for the downscaling of weather and climate data. It is based on the [Prithvi WxC foundation model](https://huggingface.co/collections/ibm-nasa-geospatial/prithvi-for-weather-and-climate-6740a9252d5278b1c75b3418). `granite-geospatial-downscaling` has been used to downscale MERRA-2 data, ECCC data, and EURO-CORDEX climate simulations. The weights for the MERRA-2 and ECCC tasks are included here.
<b>6x downscaling of MERRA-2 2m temperature</b>
<center><img src="downscaling_T2M_coolwarm_animated.gif" alt="Downscaling of MERRA-2 T2M" width=462></center>
<b>8x downscaling of ECCC's u10 wind component</b>
<center><img src="downscaling_eccc_u10.png" alt="Downscaling of ECCC's u10 Wind Component" width=462></center>
More information: [Code](https://github.com/IBM/granite-wxc), [base model](https://huggingface.co/collections/ibm-nasa-geospatial/prithvi-for-weather-and-climate-6740a9252d5278b1c75b3418), paper (to appear).
## Architecture
From an architecture point of view, we embed Prithvi WxC's transformer layers into a series of convolutional layers. That is, we typically increase resolution before and after the pre-trained transformer stages.
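A conceptual sketch of that layout is below; it is illustrative only: the layer widths, the sub-pixel upsampling head, and the identity stand-in for the transformer stages are assumptions, not the released architecture.
```python
import torch
import torch.nn as nn

class ConvWrappedTransformer(nn.Module):
    """Conceptual layout only: conv layers around pre-trained transformer stages."""
    def __init__(self, backbone: nn.Module, in_ch: int = 1, out_ch: int = 1,
                 width: int = 64, factor: int = 6):
        super().__init__()
        # Convolutional embedding before the pre-trained transformer stages
        self.stem = nn.Sequential(nn.Conv2d(in_ch, width, 3, padding=1), nn.GELU())
        self.backbone = backbone  # stand-in for the Prithvi WxC transformer stages
        # Increase resolution after the transformer via sub-pixel upsampling
        self.head = nn.Sequential(
            nn.Conv2d(width, width * factor * factor, 3, padding=1),
            nn.PixelShuffle(factor),
            nn.Conv2d(width, out_ch, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(self.stem(x)))

# Smoke test with an identity stand-in for the transformer stages
model = ConvWrappedTransformer(nn.Identity())
print(model(torch.randn(1, 1, 60, 96)).shape)  # torch.Size([1, 1, 360, 576])
```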
## Data - MERRA-2
As a reference and a baseline for how to use Prithvi WxC and the downscaling architecture, we have used `granite-geospatial-downscaling` for 6x downscaling of MERRA-2 2m temperature data. That is, we take MERRA-2 data at 0.5 x 0.625 degrees resolution, coarsen it by a factor of six along each axis, and then apply an additional smoothing filter via a 3x3 convolution. Subsequently, we fine-tune the above architecture to recover the high-resolution data. The weights for this are included here.
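The coarsen-then-smooth preprocessing described above could look roughly like this (a sketch; average pooling and uniform 3x3 kernel weights are assumptions, since the exact filter is not specified here):
```python
import torch
import torch.nn.functional as F

def coarsen(x: torch.Tensor, factor: int = 6) -> torch.Tensor:
    """Coarsen a (B, C, H, W) field by `factor` per axis, then smooth with a 3x3 filter."""
    x = F.avg_pool2d(x, kernel_size=factor)          # 6x coarsening along each axis
    kernel = torch.ones(x.shape[1], 1, 3, 3) / 9.0   # assumed: uniform 3x3 smoothing
    return F.conv2d(x, kernel, padding=1, groups=x.shape[1])

low_res = coarsen(torch.randn(1, 1, 360, 576))       # MERRA-2-sized grid -> (1, 1, 60, 96)
```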
## Data - ECCC (Environment and Climate Change Canada)
We use Prithvi WxC for the downscaling task on Canada's operational Numerical Weather Prediction (NWP) systems. Specifically, the goal is to downscale forecasts from the Global Deterministic Prediction System (GDPS), which provides 10-day forecasts at ~15 km resolution, to the High-Resolution Deterministic Prediction System (HRDPS), which produces 48-hour forecasts at ~2.5 km resolution. The weights for this are included here.
## Further applications - EURO-CORDEX
In addition, we have used the same architecture with different hyperparameter choices for a 12x downscaling of a subset of the EURO-CORDEX climate simulations.
|
mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF
|
mradermacher
| 2025-08-20T14:10:42Z | 2,667 | 0 |
transformers
|
[
"transformers",
"gguf",
"internvl",
"custom_code",
"abliterated",
"uncensored",
"multilingual",
"base_model:huihui-ai/Huihui-InternVL3-78B-abliterated",
"base_model:quantized:huihui-ai/Huihui-InternVL3-78B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-12T15:17:11Z |
---
base_model: huihui-ai/Huihui-InternVL3-78B-abliterated
language:
- multilingual
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
license_name: qwen
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- internvl
- custom_code
- abliterated
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/huihui-ai/Huihui-InternVL3-78B-abliterated
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Huihui-InternVL3-78B-abliterated-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 29.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 45.8 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF/resolve/main/Huihui-InternVL3-78B-abliterated.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
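For the split files above (e.g. the Q6_K parts), concatenate the parts back into a single GGUF before use; a typical invocation (filenames taken from the table, see also the linked READMEs):
```bash
cat Huihui-InternVL3-78B-abliterated.i1-Q6_K.gguf.part1of2 \
    Huihui-InternVL3-78B-abliterated.i1-Q6_K.gguf.part2of2 \
    > Huihui-InternVL3-78B-abliterated.i1-Q6_K.gguf
```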
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jo-mengr/mmcontext-pubmedbert-geneformer-100k_adapter
|
jo-mengr
| 2025-08-20T14:10:40Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:81143",
"loss:MultipleNegativesRankingLoss",
"code",
"dataset:jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:NeuML/pubmedbert-base-embeddings",
"base_model:finetune:NeuML/pubmedbert-base-embeddings",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-20T14:10:22Z |
---
language:
- code
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:81143
- loss:MultipleNegativesRankingLoss
base_model: NeuML/pubmedbert-base-embeddings
widget:
- source_sentence: sample_idx:census_d7d7e89c-c93a-422d-8958-9b4a90b69558_1563
sentences:
- This measurement was conducted with 10x 5' v1. Naive B cell from blood of a 26-year
old male, activated with CD3.
- sample_idx:census_d7d7e89c-c93a-422d-8958-9b4a90b69558_5036
- This measurement was conducted with 10x 5' v1. A 26-year-old male individual's
blood sample, containing naive thymus-derived CD4-positive, alpha-beta T cells,
with no activation or treatment, and in G1 phase.
- source_sentence: sample_idx:census_cf83c98a-3791-4537-bbde-a719f6d73c13_738
sentences:
- This measurement was conducted with 10x 3' v3. Blasts cells derived from the blood
of a 4-month old male.
- sample_idx:census_cf83c98a-3791-4537-bbde-a719f6d73c13_1016
- This measurement was conducted with 10x 3' v3. This is a megakaryocyte-erythroid
progenitor cell (MEP-like) derived from a 1-month-old female patient with KMT2A-rearranged
(KMT2A-r) infant acute lymphoblastic leukemia (ALL). The cell exhibits increased
lineage plasticity, downregulated steroid response pathways, and belongs to a
hematopoietic stem and progenitor-like (HSPC-like) population that forms an immunosuppressive
signaling circuit with cytotoxic lymphocytes.
- source_sentence: sample_idx:census_2872f4b0-b171-46e2-abc6-befcf6de6306_2050
sentences:
- sample_idx:census_2872f4b0-b171-46e2-abc6-befcf6de6306_1719
- This measurement was conducted with 10x 5' v2. Memory B cell derived from a 65-79
year-old male, taken from the mesenteric lymph node.
- This measurement was conducted with 10x 5' v2. IgA plasma cell sample taken from
the mesenteric lymph node of a 65-79 year-old female.
- source_sentence: sample_idx:census_3f31f8ce-bbf6-4df8-8203-aa240ed03026_299
sentences:
- This measurement was conducted with 10x 3' v3. Neuron cell type from a 50-year-old
male human cerebral cortex, specifically from the Cingulate gyrus, rostral (CgGr),
Ventral division of MFC - A24 region, with European self-reported ethnicity, analyzed
at the nucleus level.
- This measurement was conducted with 10x 3' v3. Neuron cell type from a 50-year-old
male human cerebral cortex, specifically the rostral cingulate gyrus, ventral
division of MFC, A24, with European ethnicity.
- sample_idx:census_3f31f8ce-bbf6-4df8-8203-aa240ed03026_30
- source_sentence: sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_14644
sentences:
- sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_16130
- This measurement was conducted with 10x 3' v3. Classical monocytes derived from
the blood of a female individual in her seventies.
- This measurement was conducted with 10x 5' v2. Sample is a CD8-positive, alpha-beta
memory T cell, specifically a cytotoxic T cell, from the lamina propria tissue
of an individual in her eighth decade of life.
datasets:
- jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on NeuML/pubmedbert-base-embeddings
results:
- task:
type: triplet
name: Triplet
dataset:
name: cellxgene pseudo bulk 100k multiplets natural language annotation cell
sentence 1
type: cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1
metrics:
- type: cosine_accuracy
value: 0.5162578821182251
name: Cosine Accuracy
---
# SentenceTransformer based on NeuML/pubmedbert-base-embeddings
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) on the [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) <!-- at revision d6eaca8254bc229f3ca42749a5510ae287eb3486 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation)
- **Language:** code
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): MMContextEncoder(
(text_encoder): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0-11): 12 x BertLayer(
(attention): BertAttention(
(self): BertSdpaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(text_adapter): AdapterModule(
(net): Sequential(
(0): Linear(in_features=768, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=1024, bias=True)
(3): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(pooling): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-100k_adapter")
# Run inference
sentences = [
'sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_14644',
"This measurement was conducted with 10x 5' v2. Sample is a CD8-positive, alpha-beta memory T cell, specifically a cytotoxic T cell, from the lamina propria tissue of an individual in her eighth decade of life.",
"This measurement was conducted with 10x 3' v3. Classical monocytes derived from the blood of a female individual in her seventies.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, -0.2246, -0.1095],
# [-0.2246, 1.0000, 0.9513],
# [-0.1095, 0.9513, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.5163** |
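The accuracy above can be reproduced with a short script (a sketch, assuming the dataset's `validation` split and the `anchor`/`positive`/`negative_1` columns documented under Training Details):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-100k_adapter")

# Evaluation split of the training dataset; the split name is an assumption
eval_ds = load_dataset(
    "jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation",
    split="validation",
)

evaluator = TripletEvaluator(
    anchors=eval_ds["anchor"],
    positives=eval_ds["positive"],
    negatives=eval_ds["negative_1"],
    name="cellxgene_triplet",
)
print(evaluator(model))  # e.g. {'cellxgene_triplet_cosine_accuracy': 0.5163}
```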
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [9916878](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/9916878bbf20fb8f9d6a0be4c997236e027cabd4)
* Size: 81,143 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 56 characters</li><li>mean: 58.72 characters</li><li>max: 60 characters</li></ul> | <ul><li>min: 92 characters</li><li>mean: 216.13 characters</li><li>max: 900 characters</li></ul> | <ul><li>min: 101 characters</li><li>mean: 215.14 characters</li><li>max: 870 characters</li></ul> | <ul><li>min: 56 characters</li><li>mean: 58.75 characters</li><li>max: 60 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:--------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------|
| <code>sample_idx:census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_26009</code> | <code>This measurement was conducted with 10x 3' v2. A proliferating lymphocyte cell sample, obtained from a 34-year-old female Asian individual, derived from peripheral blood mononuclear cells.</code> | <code>This measurement was conducted with 10x 3' v2. Sample is a 25-year-old female with European ethnicity, having CD8-positive, alpha-beta T cell type. This cell type exhibits elevated expression of type 1 interferon-stimulated genes (ISGs) in monocytes, reduction of naïve CD4+ T cells correlating with monocyte ISG expression, and expansion of repertoire-restricted cytotoxic GZMH+ CD8+ T cells.</code> | <code>sample_idx:census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_14165</code> |
| <code>sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_6333</code> | <code>This measurement was conducted with 10x 5' v1. Sample is a cell from the omentum tissue, specifically an effector memory CD4-positive, alpha-beta T cell, from a female in her sixth decade.</code> | <code>This measurement was conducted with 10x 5' v2. Conventional dendritic cell from the jejunal epithelium of a female in her eighth decade.</code> | <code>sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_2714</code> |
| <code>sample_idx:census_adda0684-f8ea-4403-b393-2a25607077c4_271</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male, specifically from the thalamic complex, specifically the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG).</code> | <code>This measurement was conducted with 10x 3' v3. Neuron from the thalamic complex (thalamus, posterior nuclear complex of thalamus, medial geniculate nuclei) of a 42-year-old male, identified as a midbrain-derived inhibitory neuron.</code> | <code>sample_idx:census_adda0684-f8ea-4403-b393-2a25607077c4_425</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
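For reference, this loss can be instantiated with the same parameters as follows (a minimal sketch; the base-model load is illustrative):

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("NeuML/pubmedbert-base-embeddings")  # illustrative
loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,                   # matches "scale" above
    similarity_fct=util.cos_sim,  # matches "similarity_fct": "cos_sim"
)
```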
### Evaluation Dataset
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [9916878](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/9916878bbf20fb8f9d6a0be4c997236e027cabd4)
* Size: 9,011 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 56 characters</li><li>mean: 58.73 characters</li><li>max: 60 characters</li></ul> | <ul><li>min: 99 characters</li><li>mean: 209.99 characters</li><li>max: 941 characters</li></ul> | <ul><li>min: 102 characters</li><li>mean: 213.87 characters</li><li>max: 981 characters</li></ul> | <ul><li>min: 56 characters</li><li>mean: 58.73 characters</li><li>max: 60 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:--------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------|
| <code>sample_idx:census_0b4a15a7-4e9e-4555-9733-2423e5c66469_490</code> | <code>This measurement was conducted with 10x 3' v3. Cell sample from the cortex of kidney, taken from a 43-year-old male of European ethnicity with a reported history of kidney cancer. The cell type is identified as a kidney collecting duct intercalated cell.</code> | <code>This measurement was conducted with 10x 3' v3. Kidney collecting duct intercalated cell from a 43-year old European male with kidney cancer, taken from the cortex of kidney and cryopreserved for further analysis.</code> | <code>sample_idx:census_0b4a15a7-4e9e-4555-9733-2423e5c66469_9</code> |
| <code>sample_idx:census_4976b234-9028-4b4b-8a2f-8ac59d636219_269</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male cerebellum, specifically from the Cerebellar Vermis - CBV region, with European self-reported ethnicity, analyzed at the nucleus level.</code> | <code>This measurement was conducted with 10x 3' v3. Endothelial cells derived from the cerebellum (specifically, cerebellar vermis) of a 42-year-old male, classified under the vascular supercluster term.</code> | <code>sample_idx:census_4976b234-9028-4b4b-8a2f-8ac59d636219_923</code> |
| <code>sample_idx:census_44882825-0da1-4547-b721-2c6105d4a9d1_10258</code> | <code>This measurement was conducted with 10x 5' v1. Cell sample from the tonsil of a 9-year-old female with recurrent tonsillitis, characterized as a centroblast B cell with IGLC2, IGLV7-43, IGLJ3 immunoglobulin genes expressed.</code> | <code>This measurement was conducted with 10x 5' v1. Centroblast cells derived from a 3-year-old male human tonsil sample, with obstructive sleep apnea and recurrent tonsillitis, undergoing affinity maturation and differentiation into memory or plasma cells.</code> | <code>sample_idx:census_44882825-0da1-4547-b721-2c6105d4a9d1_9654</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 0.05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `bf16`: True
- `gradient_checkpointing`: True
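These values map onto the trainer configuration roughly as follows (a sketch using `SentenceTransformerTrainingArguments`; the output directory is illustrative):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="mmcontext-pubmedbert-geneformer-100k_adapter",  # illustrative
    eval_strategy="steps",
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=0.05,
    num_train_epochs=2,
    warmup_ratio=0.1,
    bf16=True,
    gradient_checkpointing=True,
)
```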
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | cellxgene pseudo bulk 100k multiplets natural language annotation loss | cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_cosine_accuracy |
|:------:|:----:|:-------------:|:----------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------:|
| 0.3155 | 100 | 4.3009 | 20.4535 | 0.5063 |
| 0.6309 | 200 | 3.2356 | 22.4190 | 0.5055 |
| 0.9464 | 300 | 2.9358 | 19.8626 | 0.5072 |
| 1.2618 | 400 | 2.7478 | 19.9669 | 0.5104 |
| 1.5773 | 500 | 2.634 | 18.4317 | 0.5134 |
| 1.8927 | 600 | 2.554 | 17.2588 | 0.5163 |
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 5.0.0
- Transformers: 4.55.0.dev0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.9.0
- Datasets: 2.19.1
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755698997
|
lilTAT
| 2025-08-20T14:10:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:10:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1755698878
|
roeker
| 2025-08-20T14:09:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:08:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755697210
|
hakimjustbao
| 2025-08-20T14:06:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:06:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755698712
|
yaelahnal
| 2025-08-20T14:06:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:06:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755696993
|
vwzyrraz7l
| 2025-08-20T14:06:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:06:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rembot/Westbot
|
rembot
| 2025-08-20T14:05:35Z | 0 | 0 | null |
[
"en",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T13:56:32Z |
---
license: apache-2.0
language:
- en
base_model:
- microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755698685
|
lilTAT
| 2025-08-20T14:05:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:05:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Khawarizmi-SPI-MLP-8B-GGUF
|
mradermacher
| 2025-08-20T14:04:16Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"khawarizmiai",
"en",
"base_model:khawarizmiai/Khawarizmi-SPI-MLP-8B",
"base_model:quantized:khawarizmiai/Khawarizmi-SPI-MLP-8B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-19T01:57:38Z |
---
base_model: khawarizmiai/Khawarizmi-SPI-MLP-8B
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- merge
- khawarizmiai
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/khawarizmiai/Khawarizmi-SPI-MLP-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Khawarizmi-SPI-MLP-8B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
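For a quick local test, a minimal sketch with `llama-cpp-python` (the file name is illustrative; pick any quant from the table below):

```python
from llama_cpp import Llama

# Path to a downloaded quant from the table below (illustrative file name)
llm = Llama(model_path="Khawarizmi-SPI-MLP-8B.Q4_K_M.gguf", n_ctx=4096)

out = llm("Briefly explain what INT8 quantization trades off.", max_tokens=128)
print(out["choices"][0]["text"])
```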
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-GGUF/resolve/main/Khawarizmi-SPI-MLP-8B.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-GGUF/resolve/main/Khawarizmi-SPI-MLP-8B.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-GGUF/resolve/main/Khawarizmi-SPI-MLP-8B.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-GGUF/resolve/main/Khawarizmi-SPI-MLP-8B.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-GGUF/resolve/main/Khawarizmi-SPI-MLP-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-GGUF/resolve/main/Khawarizmi-SPI-MLP-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-GGUF/resolve/main/Khawarizmi-SPI-MLP-8B.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-GGUF/resolve/main/Khawarizmi-SPI-MLP-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-GGUF/resolve/main/Khawarizmi-SPI-MLP-8B.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-GGUF/resolve/main/Khawarizmi-SPI-MLP-8B.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-GGUF/resolve/main/Khawarizmi-SPI-MLP-8B.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Khawarizmi-SPI-MLP-8B-GGUF/resolve/main/Khawarizmi-SPI-MLP-8B.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF
|
mradermacher
| 2025-08-20T14:03:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"sft",
"trl",
"ja",
"en",
"dataset:japanese-receipts",
"base_model:sabaridsnfuji/Japanese-Receipt-VL-lfm2-450M",
"base_model:quantized:sabaridsnfuji/Japanese-Receipt-VL-lfm2-450M",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-19T15:57:20Z |
---
base_model: sabaridsnfuji/Japanese-Receipt-VL-lfm2-450M
datasets:
- japanese-receipts
language:
- ja
- en
library_name: transformers
license: apache-2.0
model_name: lfm2-vl-med
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- sft
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/sabaridsnfuji/Japanese-Receipt-VL-lfm2-450M
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Japanese-Receipt-VL-lfm2-450M-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
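Because this is a vision-language model, inference needs both a main GGUF and one of the `mmproj` supplements from the table below. A sketch with llama.cpp's multimodal CLI (binary name and flags vary between llama.cpp versions, so treat this as an assumption to verify against your build):

```bash
./llama-mtmd-cli \
  -m Japanese-Receipt-VL-lfm2-450M.Q4_K_M.gguf \
  --mmproj Japanese-Receipt-VL-lfm2-450M.mmproj-f16.gguf \
  --image receipt.jpg \
  -p "Extract the store name and total amount from this receipt."
```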
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.2 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.mmproj-f16.gguf) | mmproj-f16 | 0.3 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Japanese-Receipt-VL-lfm2-450M-GGUF/resolve/main/Japanese-Receipt-VL-lfm2-450M.f16.gguf) | f16 | 0.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
roeker/blockassist-bc-quick_wiry_owl_1755698561
|
roeker
| 2025-08-20T14:03:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:03:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jasonhuang3/bpo-qwen-2-5-7b-math-ep2-our_4_alpha_0.3_lora_28k
|
jasonhuang3
| 2025-08-20T14:02:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T17:39:46Z |
---
base_model: Qwen/Qwen2.5-Math-7B
library_name: transformers
model_name: bpo-qwen-2-5-7b-math-ep2-our_4_alpha_0.3_lora_28k
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for bpo-qwen-2-5-7b-math-ep2-our_4_alpha_0.3_lora_28k
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jasonhuang3/bpo-qwen-2-5-7b-math-ep2-our_4_alpha_0.3_lora_28k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jasonhuang3-school/huggingface/runs/jcdwzlxa)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
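A minimal sketch of this kind of training setup with TRL's `DPOTrainer` (the dataset and hyperparameters below are illustrative, not the exact run configuration):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Math-7B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Math-7B")

# Illustrative preference dataset with "prompt"/"chosen"/"rejected" columns
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="bpo-qwen-dpo", beta=0.1),  # illustrative
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```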
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.1
- Pytorch: 2.4.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Behzadshomali/15_45_57
|
Behzadshomali
| 2025-08-20T14:00:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Behzadshomali/Teuken3.7B",
"base_model:finetune:Behzadshomali/Teuken3.7B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T13:46:34Z |
---
base_model: Behzadshomali/Teuken3.7B
library_name: transformers
model_name: '15_45_57'
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 15_45_57
This model is a fine-tuned version of [Behzadshomali/Teuken3.7B](https://huggingface.co/Behzadshomali/Teuken3.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Behzadshomali/15_45_57", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/behzadshomali/Teuken3.73T_IT_grade-school-math/runs/kdevgxqs)
This model was trained with SFT.
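A minimal sketch of supervised fine-tuning with TRL's `SFTTrainer` (the dataset and arguments are illustrative; the actual training data is not documented here):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative conversational dataset
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Behzadshomali/Teuken3.7B",  # base model from this card
    args=SFTConfig(output_dir="15_45_57"),
    train_dataset=dataset,
)
trainer.train()
```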
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755698388
|
lilTAT
| 2025-08-20T14:00:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:00:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
silviaherranz/espacio
|
silviaherranz
| 2025-08-20T13:59:43Z | 0 | 0 | null |
[
"image-to-image-translation",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-08-08T12:36:07Z |
---
language:
- en
pipeline_tag: image-to-image-translation
license: apache-2.0
---
# Model Card 0.5
---
## 0. Card Metadata
**Creation date**: 2025/08/06
### Versioning
- **Version number**: 0.5
- **Version changes**: new limitations
**DOI**: 4567894793743589
---
## 1. Model Basic Information
**Name**: Modeloguay123
**Creation date**: 2022/08/03
### Versioning
- **Version number**: 33.77.4567
- **Version changes**: new page added
**DOI**: 115483.hh5.4
### Model scope
- **Summary**: auto-segmentation model
- **Anatomical site**: Thorax
### Clearance
- **Type**: Approved for medical use
#### Approved by
- **Name(s)**: Ana
- **Institution(s)**: UCLouvain
- **Contact email(s)**: ana@gmail.com
**Intended users**: Radiation oncologists
**Observed limitations**: None
**Type of learning architecture**: Random Forest
### Developed by
- **Name**: Silvia
- **Institution(s)**: uam
- **Contact email(s)**: silvia@uam.es
**Conflict of interest**: NA
**Software licence**: apache-2.0
---
## 2. Technical specifications
### 2.1 Model overview
#### Model pipeline
- **Summary:** CT images are blue
- **Model inputs:** ['CT']
- **Model outputs:** ['RTSTRUCT_Acetabulums', 'CBCT']
- **Pre-processing:** cropping the body
- **Post-processing:** hole-filling
---
### 2.2 Learning architecture(s)
#### Learning architecture 1
- **Total number of trainable parameters:** 4000000
- **Number of inputs:** 5
- **Input content:**
- **Input size:** [128]
- **Number of outputs:** 1
- **Output content:**
- **Output size:** [128, 56]
- **Loss function:** MSE
- **Batch size:** None
- **Regularisation:** None
- **Uncertainty quantification techniques:** Monte Carlo dropout
- **Explainability techniques:** LIME
---
### 2.3 Hardware & software
- **Libraries and dependencies:** Pytorch 3.9
---
## 3. Training Data Methodology and Information
#### Fine-tuned from
- **Model name:** NA
- **URL/DOI to model card:** NA
- **Tuning technique:** NA
#### Training Dataset
##### General information
- **Total size:** [80]
- **Number of patients:** 7
- **Source:** Private dataset from ClinicsX
- **Acquisition period:** March 2025-August 2025
- **Inclusion / exclusion criteria:** Males were excluded
- **Type of data augmentation:** Flipping [left - right]
- **Strategy for data augmentation:** random
##### Technical specifications
- **CT** (model_inputs)
- **Image resolution:** NA
- **Patient positioning:** NA
- **Scan(s) manufacturer and model:** NA
- **Scan acquisition parameters:** NA
- **Scan reconstruction parameters:** NA
- **FOV:** NA
- **RTSTRUCT_Acetabulums** (model_outputs)
- **Image resolution:** [5.9, 7.6, 3.0]
- **Patient positioning:** Supine
- **Scan(s) manufacturer and model:** NA
- **Scan acquisition parameters:** NA
- **Scan reconstruction parameters:** NA
- **FOV:** NA
- **CBCT** (model_outputs)
- **Image resolution:** [5.9, 7.6, 3.0]
- **Patient positioning:** head to toes
- **Scan(s) manufacturer and model:** NA
- **Scan acquisition parameters:** NA
- **Scan reconstruction parameters:** NA
- **FOV:** [5.9, 7.6]
- **Reference standard:** NA
- **Reference standard QA:** delineations corrected by 3 doctors
##### Patient demographics and clinical characteristics
- **Age:** [7.5, 6.8]
- **Sex:** 60% F 40% M
**Validation strategy:** Cross-validation
**Validation data partition:** [20%]
**Weights initialization:** Uniform
**Model choice criteria:** last epoch
**Inference method:** single fold
---
## 4. Evaluation Data Methodology, Results and Commissioning
### 1 Siemens sample evaluation
**Evaluation date:** 2025/08/05
#### Evaluated by
- **Name(s):** Ana
- **Institution(s):** UCLouvain
- **Contact email(s):** ana@gmail.com
- **Same as 'Approved by':** Yes
**Evaluation frame:** retrospective
**Sanity check:** Model tested on a set of known images
#### Evaluation dataset
##### General information
- **Total size:** [577, 567]
- **Number of patients:** 7
- **Source:** public dataset from ucm
- **Acquisition period:** March 2023- April 2024
- **Inclusion / Exclusion criteria:** children excluded
- **URL info:** None
##### Technical specifications
- **CT** (model_inputs)
- **Image resolution:** NA
- **Patient positioning:** NA
- **Scanner model:** NA
- **Scan acquisition parameters:** NA
- **Scan reconstruction parameters:** NA
- **Fov:** NA
- **RTSTRUCT_Acetabulums** (model_outputs)
- **Image resolution:** [5.9, 7.6, 3.0]
- **Patient positioning:** supine
- **Scanner model:** NA
- **Scan acquisition parameters:** NA
- **Scan reconstruction parameters:** NA
- **Fov:** NA
- **CBCT** (model_outputs)
- **Image resolution:** NA
- **Patient positioning:** NA
- **Scanner model:** NA
- **Scan acquisition parameters:** NA
- **Scan reconstruction parameters:** NA
- **Fov:** NA
- **Reference standard:** NA
- **Reference standard QA:** NA
- **Additional information:** NA
##### Patient demographics and clinical characteristics
- **Age:** [5.9, 7.6]
- **Sex:** 100% F
#### Quantitative evaluation
##### Image Similarity Metrics
**SSIM (Structural Similarity Index)**
| Field | Value |
|---|---|
| Type | SSIM (Structural Similarity Index) |
| On Volume | AirWay_Dist |
| Registration | NONRIGID |
| Sample Data | None |
| Mean Data | [5.9, 7.6, 3.0, 5.3] |
| Figure Appendix Label | |
##### Dose Metrics
**GPR (Gamma Passing Rate)**
| Field | Value |
|---|---|
| Type | GPR (Gamma Passing Rate) |
| Metric Specifications | None |
| On Volume | Bone_Mastoid |
| Registration | NONE |
| Treatment Modality | External beam radiation therapy (EBRT) - Protons - Scanning beam single-field optimization |
| Dose Engine | Collapsed cone convolution |
| Dose Grid Resolution | [5.9, 7.6, 3.0] |
| TPS Vendor | RayStation |
| Sample Data | None |
| Mean Data | [5.9, 7.6, 3.0, 6.7] |
| Figure Appendix Label | |
#### Qualitative evaluation
**Evaluators information:**
**Likert scoring**
- Method:
- Results:
**Turing test**
- Method:
- Results:
**Time saving**
- Method:
- Results:
**Other**
- Method:
- Results:
**Explainability:**
**Citation details:**
---
## 5. Other considerations
_No other considerations provided._
---
|
unitova/blockassist-bc-zealous_sneaky_raven_1755696710
|
unitova
| 2025-08-20T13:59:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T13:59:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755696720
|
calegpedia
| 2025-08-20T13:58:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T13:58:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755698210
|
Vasya777
| 2025-08-20T13:57:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T13:57:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1755698088
|
DiFors
| 2025-08-20T13:57:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T13:57:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-gmq-gmq-ctranslate2-android
|
manancode
| 2025-08-20T12:29:03Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:28:57Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gmq-gmq-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gmq-gmq` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gmq-gmq
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
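Both dependencies are standard PyPI packages:

```bash
pip install ctranslate2 sentencepiece
```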
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-gmq-en-ctranslate2-android
|
manancode
| 2025-08-20T12:28:54Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:28:45Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gmq-en-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gmq-en` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gmq-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
brAInwav/GLM-4.5-mlx-4Bit
|
brAInwav
| 2025-08-20T12:28:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"glm4_moe",
"text-generation",
"mlx",
"conversational",
"en",
"zh",
"base_model:zai-org/GLM-4.5",
"base_model:quantized:zai-org/GLM-4.5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-20T12:01:58Z |
---
language:
- en
- zh
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- mlx
base_model: zai-org/GLM-4.5
---
# brAInwav/GLM-4.5-mlx-4Bit
The model [brAInwav/GLM-4.5-mlx-4Bit](https://huggingface.co/brAInwav/GLM-4.5-mlx-4Bit) was converted to MLX format from [zai-org/GLM-4.5](https://huggingface.co/zai-org/GLM-4.5) using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("brAInwav/GLM-4.5-mlx-4Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
manancode/opus-mt-gil-sv-ctranslate2-android
|
manancode
| 2025-08-20T12:28:08Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:27:59Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gil-sv-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gil-sv` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gil-sv
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
gajahgajah/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-climbing_armored_tarantula
|
gajahgajah
| 2025-08-20T12:27:39Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am climbing_armored_tarantula",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T12:09:15Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am climbing_armored_tarantula
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manancode/opus-mt-gem-en-ctranslate2-android
|
manancode
| 2025-08-20T12:26:54Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:26:45Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gem-en-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gem-en` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gem-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
Youtu-RAG/CoDi-Embedding-V1
|
Youtu-RAG
| 2025-08-20T12:26:50Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"minicpm",
"sentence-similarity",
"custom_code",
"en",
"zh",
"base_model:openbmb/MiniCPM-Embedding",
"base_model:finetune:openbmb/MiniCPM-Embedding",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-20T12:09:56Z |
---
language:
- en
- zh
base_model:
- openbmb/MiniCPM-Embedding
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
## CoDi-Embedding-V1
CoDi-Embedding-V1 is an embedding model that supports both Chinese and English retrieval, with particularly strong performance on Chinese retrieval: it achieved SOTA results on the Chinese MTEB benchmark as of August 20, 2025. Built on the [MiniCPM-Embedding](https://huggingface.co/openbmb/MiniCPM-Embedding) model, CoDi-Embedding-V1 extends the maximum sequence length from 512 to 4,096 tokens, significantly enhancing its capability for long-document retrieval. The model employs a mean pooling strategy in which tokens from the instruction are excluded during pooling to optimize retrieval effectiveness.
### Model Description
- **Maximum Sequence Length:** 4096 tokens
- **Output Dimensionality:** 2304
- **Model Size:** 2.4B
## Requirements
```
transformers>=4.37.2
```
## Usage
### Sentence Transformers
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Youtu-RAG/CoDi-Embedding-V1", trust_remote_code=True)  # custom_code model
queries = ["结算业务系统用户使用"]
documents = [
    "根据解冻日输入范围，查询出该时间范围内到期的账户冻结列表。",
    "智能定期存款到期日为节假日时处理—设置提前或顺延，支持智能定期证实书提前或顺延到期提醒。",
    "账户开户时设置了账户到期日，账户到期提醒是根据全机构系统参数设置"
]
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
# Get the similarity scores for the embeddings
similarity = model.similarity(query_embeddings, document_embeddings)
print(similarity)
```
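To turn the similarity matrix into an actual retrieval result, the documents can be ranked per query by score; a minimal sketch building on the variables above:
```python
# Rank documents for the first (and only) query, highest score first
scores = similarity[0]
ranking = sorted(range(len(documents)), key=lambda i: float(scores[i]), reverse=True)
for i in ranking:
    print(f"{float(scores[i]):.4f}  {documents[i]}")
```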
|
manancode/opus-mt-gaa-en-ctranslate2-android
|
manancode
| 2025-08-20T12:25:51Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:25:42Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gaa-en-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gaa-en` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gaa-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fse-fi-ctranslate2-android
|
manancode
| 2025-08-20T12:25:16Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:25:07Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fse-fi-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fse-fi` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fse-fi
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-zne-ctranslate2-android
|
manancode
| 2025-08-20T12:25:05Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:24:56Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-zne-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-zne` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-zne
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
elsvastika/blockassist-bc-arctic_soaring_weasel_1755690277
|
elsvastika
| 2025-08-20T12:25:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic soaring weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:24:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic soaring weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755691224
|
sampingkaca72
| 2025-08-20T12:24:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:24:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-fr-yo-ctranslate2-android
|
manancode
| 2025-08-20T12:24:52Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:24:43Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-yo-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-yo` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-yo
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-vi-ctranslate2-android
|
manancode
| 2025-08-20T12:23:51Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:23:42Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-vi-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-vi` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-vi
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
anirudhsankar/tinyllama-finetuned-constitution
|
anirudhsankar
| 2025-08-20T12:23:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"lora",
"transformers",
"text-generation",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-20T12:23:12Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
tags:
- base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: tinyllama-finetuned-constitution
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-finetuned-constitution
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9029 | 1.3409 | 500 | 0.9164 |
| 0.828 | 2.6819 | 1000 | 0.8671 |
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
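The card lists framework versions but no usage snippet; since this is a PEFT LoRA adapter, loading it presumably follows the standard pattern below (a sketch, assuming the adapter weights live in this repository):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
# Attach the fine-tuned LoRA adapter on top of the base weights
model = PeftModel.from_pretrained(base_model, "anirudhsankar/tinyllama-finetuned-constitution")

inputs = tokenizer("The Constitution provides that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```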
|
manancode/opus-mt-fr-tvl-ctranslate2-android
|
manancode
| 2025-08-20T12:22:52Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:22:42Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-tvl-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-tvl` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-tvl
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-ts-ctranslate2-android
|
manancode
| 2025-08-20T12:22:28Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:22:17Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-ts-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-ts` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-ts
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-tn-ctranslate2-android
|
manancode
| 2025-08-20T12:21:49Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:21:40Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-tn-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-tn` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-tn
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
faris27/indobert-hoax-detection
|
faris27
| 2025-08-20T12:21:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"indobert",
"indonesian",
"hoax-detection",
"id",
"dataset:mochamadabdulazis/deteksi-berita-hoaks-indo-dataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T05:58:38Z |
---
language: id
license: mit
library_name: transformers
pipeline_tag: text-classification
tags:
- indobert
- indonesian
- hoax-detection
- text-classification
datasets:
- mochamadabdulazis/deteksi-berita-hoaks-indo-dataset
---
# IndoBERT - Indonesian News Hoax Detection
## Model Description
This model is a *fine-tuned* version of `indobenchmark/indobert-base-p1`, trained specifically for text classification of Indonesian-language news. Its goal is to classify a news article into one of two categories: **Fact (LABEL_0)** or **Hoax (LABEL_1)**.
This project was developed as part of a personal portfolio to demonstrate an MLOps workflow spanning data collection, exploratory data analysis (EDA), *baseline* and *advanced* model training, and model publication.
## Usage
You can use this model easily with the `text-classification` pipeline from the `transformers` library.
```python
from transformers import pipeline
# This model's repository on the Hugging Face Hub
repo_name = "faris27/indobert-hoax-detection"
# Initialize the pipeline
classifier = pipeline("text-classification", model=repo_name)
# Example hoax news text (Indonesian)
teks_hoaks = "Pemerintah akan segera membagikan bantuan kuota internet sebesar 500GB untuk semua pelajar dan mahasiswa yang berlaku selama 1 tahun penuh. Cukup klik link berikut untuk mengklaimnya."
# Example factual news text (Indonesian)
teks_fakta = "Menteri Keuangan Sri Mulyani Indrawati memproyeksi pertumbuhan ekonomi Indonesia hanya akan mencapai 2,3 persen pada tahun ini. Proyeksi itu lebih rendah dari asumsi makro dalam APBN 2020 sebesar 5,3 persen."
# Run prediction
hasil = classifier([teks_hoaks, teks_fakta])
print(hasil)
```
## Training Data
The model was trained on a combined dataset from four news sources (CNN, Detik, Kompas, and TurnBackHoax.id) compiled by Wersbo and Mochamad Abdul Azis and available on Kaggle.
Total data: 24,592 news articles.
Distribution: the dataset is fairly balanced, with roughly 51.6% factual news and 48.4% hoax news.
## Training Procedure
The model was fine-tuned for 3 epochs with a batch size of 8, using the AdamW optimizer with an initial learning rate of 5e-5. Text was processed with the IndoBERT tokenizer at a maximum length of 512 tokens. A sketch of this setup is shown below.
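For reference, the hyperparameters above map onto a standard `transformers` Trainer configuration; a minimal sketch (dataset loading omitted, and the text column name `text` is an assumption):
```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "indobenchmark/indobert-base-p1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

def tokenize(batch):
    # Truncate to the 512-token maximum used during fine-tuning
    return tokenizer(batch["text"], truncation=True, max_length=512)

args = TrainingArguments(
    output_dir="indobert-hoax-detection",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=5e-5,
)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds.map(tokenize, batched=True),
#                   eval_dataset=eval_ds.map(tokenize, batched=True))
# trainer.train()
```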
## Evaluation Results
Evaluation was performed on a 20% held-out test set (4,919 articles) never seen by the model during training. The model shows a significant performance improvement over the baseline model (TF-IDF + Naive Bayes).
Accuracy: 99.84%
F1-score (hoax class): 0.99
## Uses & Limitations
Uses: this model is intended for educational purposes and as a first-pass aid for identifying potential disinformation.
Limitations: this model is not an absolute "truth detector". Its performance depends heavily on the language patterns present in the training data, and it may misclassify novel hoax types, sarcasm, or highly specific topics. Its output should always be treated as an initial indication, not a final verdict; always cross-check against credible sources.
## Author
Created by Faris Alfarizi. See the full project on GitHub: https://github.com/farisalfrz/ril-or-fek-project.
|
manancode/opus-mt-fr-tll-ctranslate2-android
|
manancode
| 2025-08-20T12:21:37Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:21:28Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-tll-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-tll` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-tll
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
VIDEOS-18-AFRIN-ER-VIRAL-VIDEO-LINK/WATCH.FULL.VIDEOS.AFRIN.ER.LINK.VIRAL.1.24.VIRAL.AFRIN.AR.LINK
|
VIDEOS-18-AFRIN-ER-VIRAL-VIDEO-LINK
| 2025-08-20T12:21:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T12:21:04Z |
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ❤► Click Here (Sign Up and Watch Full Video)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ❤► Click Here (Full Video Link)</a>
|
manancode/opus-mt-fr-sv-ctranslate2-android
|
manancode
| 2025-08-20T12:20:50Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:20:39Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-sv-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-sv` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-sv
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-st-ctranslate2-android
|
manancode
| 2025-08-20T12:20:36Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:20:26Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-st-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-st` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-st
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|