| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-13 18:26:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 558 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-13 18:25:20) | card (string, 11 chars to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
gcos/pyaptamer-aptatrans | gcos | 2025-08-18T18:13:06Z | 0 | 0 | null | ["license:bsd-3-clause", "region:us"] | null | 2025-08-18T17:48:24Z |
---
license: bsd-3-clause
---
|
Datasmartly/Data_chat_maroc2 | Datasmartly | 2025-08-18T17:49:05Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma2", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "base_model:google/gemma-2-9b-it", "base_model:finetune:google/gemma-2-9b-it", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-18T16:37:54Z |
---
library_name: transformers
license: gemma
base_model: google/gemma-2-9b-it
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Data_chat_maroc2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Data_chat_maroc2
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP
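These settings correspond roughly to the following TRL `SFTConfig` (a minimal sketch, not the original training script; the dataset and any remaining arguments are not documented in this card):
```python
from trl import SFTConfig

# Sketch reconstructed from the listed hyperparameters (assumed, not the original script).
# total_train_batch_size 128 = 4 per device x 8 GPUs x 4 gradient accumulation steps.
training_args = SFTConfig(
    output_dir="Data_chat_maroc2",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision (assumed fp16 rather than bf16)
)
```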
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.0.1
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755537020 | mang3dd | 2025-08-18T17:36:33Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us"] | null | 2025-08-18T17:36:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/nuclear-hazard-style-lora-flux-sd-xl-pony | Muapi | 2025-08-18T17:27:48Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-18T17:27:40Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# ☢️☣️Nuclear Hazard Style LoRA [FLUX+SD+XL+Pony]☣️☢️

**Base model**: Flux.1 D
**Trained words**: NuclearHazard
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:535810@921849", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/dictionnaire-infernal-louis-le-breton-style-1.5-xl-flux-pony | Muapi | 2025-08-18T17:27:29Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-18T17:27:16Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Dictionnaire Infernal (Louis Le Breton Style) 1.5,XL,Flux,Pony

**Base model**: Flux.1 D
**Trained words**: llbreton
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:180879@946285", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755535563 | Sayemahsjn | 2025-08-18T17:08:00Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us"] | null | 2025-08-18T17:07:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755535288 | sampingkaca72 | 2025-08-18T17:06:57Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us"] | null | 2025-08-18T17:06:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stuser2023/Llama-3.2-1B-couplet | stuser2023 | 2025-08-18T17:04:13Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-08-18T16:56:40Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
model_name: Llama-3.2-1B-couplet
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for Llama-3.2-1B-couplet
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="stuser2023/Llama-3.2-1B-couplet", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
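A minimal TRL SFT setup for this kind of run could look like the sketch below (illustrative only; the couplet training data is not named in this card, so the dataset identifier is a placeholder):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual couplet training data is not documented in this card.
train_dataset = load_dataset("your-username/couplet-pairs", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B-Instruct",
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="Llama-3.2-1B-couplet"),
)
trainer.train()
```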
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jo-mengr/mmcontext-pubmedbert-scvi_fm-v4 | jo-mengr | 2025-08-18T17:01:49Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:197351", "loss:MultipleNegativesRankingLoss", "code", "dataset:jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation", "dataset:jo-mengr/descriptions_genes", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:NeuML/pubmedbert-base-embeddings", "base_model:finetune:NeuML/pubmedbert-base-embeddings", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-08-18T17:01:28Z |
---
language:
- code
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:197351
- loss:MultipleNegativesRankingLoss
base_model: NeuML/pubmedbert-base-embeddings
widget:
- source_sentence: ABCB7
sentences:
- This gene encodes a tetrameric mitochondrial flavoprotein, which is a member of
the acyl-CoA dehydrogenase family. This enzyme catalyzes the initial step of the
mitochondrial fatty acid beta-oxidation pathway. Mutations in this gene have been
associated with short-chain acyl-CoA dehydrogenase (SCAD) deficiency. Alternative
splicing results in two variants which encode different isoforms. [provided by
RefSeq, Oct 2014]
- The membrane-associated protein encoded by this gene is a member of the superfamily
of ATP-binding cassette (ABC) transporters. ABC proteins transport various molecules
across extra- and intra-cellular membranes. ABC genes are divided into seven distinct
subfamilies (ABC1, MDR/TAP, MRP, ALD, OABP, GCN20, White). This protein is a member
of the MDR/TAP subfamily. Members of the MDR/TAP subfamily are involved in multidrug
resistance as well as antigen presentation. This gene encodes a half-transporter
involved in the transport of heme from the mitochondria to the cytosol. With iron/sulfur
cluster precursors as its substrates, this protein may play a role in metal homeostasis.
Mutations in this gene have been associated with mitochondrial iron accumulation
and isodicentric (X)(q13) and sideroblastic anemia. Alternatively spliced transcript
variants encoding multiple isoforms have been observed for this gene. [provided
by RefSeq, Nov 2012]
- The membrane-associated protein encoded by this gene is a member of the superfamily
of ATP-binding cassette (ABC) transporters. ABC proteins transport various molecules
across extra- and intracellular membranes. ABC genes are divided into seven distinct
subfamilies (ABC1, MDR/TAP, MRP, ALD, OABP, GCN20, and White). This encoded protein
is a member of the ABC1 subfamily. Members of the ABC1 subfamily comprise the
only major ABC subfamily found exclusively in multicellular eukaryotes. This gene
is clustered among 4 other ABC1 family members on 17q24, but neither the substrate
nor the function of this gene is known. Alternative splicing of this gene results
in several transcript variants; however, not all variants have been fully described.
[provided by RefSeq, Jul 2008]
- source_sentence: ABCC8
sentences:
- The protein encoded by this gene is a member of the superfamily of ATP-binding
cassette (ABC) transporters. ABC proteins transport various molecules across extra-
and intra-cellular membranes. ABC genes are divided into seven distinct subfamilies
(ABC1, MDR/TAP, MRP, ALD, OABP, GCN20, White). This protein is a member of the
MRP subfamily which is involved in multi-drug resistance. This protein functions
as a modulator of ATP-sensitive potassium channels and insulin release. Mutations
in the ABCC8 gene and deficiencies in the encoded protein have been observed in
patients with hyperinsulinemic hypoglycemia of infancy, an autosomal recessive
disorder of unregulated and high insulin secretion. Mutations have also been associated
with non-insulin-dependent diabetes mellitus type II, an autosomal dominant disease
of defective insulin secretion. Alternatively spliced transcript variants have
been found for this gene. [provided by RefSeq, Jul 2020]
- Predicted to enable GTPase activator activity and zinc ion binding activity. Predicted
to be involved in protein transport. Located in membrane. [provided by Alliance
of Genome Resources, Jul 2025]
- The protein encoded by this gene is a member of the superfamily of ATP-binding
cassette (ABC) transporters. ABC proteins transport various molecules across extra-
and intra-cellular membranes. ABC genes are divided into seven distinct subfamilies
(ABC1, MDR/TAP, MRP, ALD, OABP, GCN20, White). This ABC full transporter is a
member of the MRP subfamily which is involved in multi-drug resistance. The product
of this gene participates in physiological processes involving bile acids, conjugated
steroids, and cyclic nucleotides. In addition, a SNP in this gene is responsible
for determination of human earwax type. This gene and family member ABCC12 are
determined to be derived by duplication and are both localized to chromosome 16q12.1.
Multiple alternatively spliced transcript variants have been described for this
gene. [provided by RefSeq, Jul 2008]
- source_sentence: MALAT1 TMSB4X ACTB TPT1 EEF1A1 S100A10 LGALS1 VIM SH3BGRL3 S100A4
FTL PTMA SRGN TMSB10 CYBA GAPDH CD74 TAGLN2 FTH1 S100A6 UBA52 YBX1 MYL6 OAZ1 CST3
NACA FAU ARPC2 GSTP1 PFN1 HSP90AA1 COTL1 PPIA ARPC3 UQCRB MYL12A CD63 EIF1 NEAT1
RACK1 MACROH2A1 ATP6V0E1 ATP5F1E SRP14 ENO1 SLC25A3 CTSH PRDX1 VAMP8 COX4I1 CAP1
BTF3 DBI HNRNPA3 GNAS DDX5 H3-3B TPM3 LAPTM5 ZEB2 GNG5 FLNA CALM1 CD44
sentences:
- MALAT1 PTMA TMSB10 LGALS1 ACTB PRDX1 S100A4 H3-3B TMSB4X VIM TPT1 LMO4 HNRNPA2B1
SH3BGRL3 TAGLN2 HNRNPU DDIT4 PFN1 IGFBP7 HMGB1 FTH1 CFL1 CD74 SOX4 KLF2 BST2 S100A11
RACK1 PSMA4 DDX5 NCL RSRP1 IRF1 SERF2 EEF1A1 CALM1 UBA52 CYBA HSP90AA1 MYL12A
AHNAK ITM2B SRP14 EMP3 CALM2 TSC22D3 YWHAZ SELENOW PPIA S100A6 TSPO IRAG2 TPM3
UBC ARPC2 HNRNPA3 UBB EIF1 JUN IFITM2 PRR13 N4BP2L2 LAPTM4A CDC42
- This measurement was conducted with 10x 3' v3. This sample is derived from a 3-month-old
male patient with KMT2A-rearranged (KMT2A-r) infant acute lymphoblastic leukemia
(ALL) with a CD8_Cytotoxic T cell type, specifically T/NK cells, and a presumed
MLL-AF4 fusion.
- This measurement was conducted with 10x 3' v3. Blast cells derived from a 1-month-old
human with a presumed MLL-AF10 fusion, projected as cDC-like cells.
- source_sentence: MALAT1 CXCL14 EEF1A1 VIM IGFBP7 COL1A2 FTH1 TPT1 S100A6 TMSB4X
A2M APOE DCN PTGDS TMSB10 LGALS1 ACTB FBLN1 FTL RARRES2 CD81 CALD1 CD63 COL6A2
MYL6 SPARCL1 NEAT1 IGFBP5 PTMA CST3 FAU SERF2 SPARC IFITM3 EIF1 S100A4 NACA JUND
COL6A1 GSN C1S CFH HSP90AA1 PDLIM1 H3-3B EDIL3 UBA52 VCAN LTBP4 TIMP3 CTSC ITM2B
IGFBP4 UBC UBB RACK1 TIMP1 ACTA2 ZFP36L2 PLPP3 TUBA1A FILIP1L FOS S100A10
sentences:
- MALAT1 TMSB10 A2M FABP5 PTMA VIM ACTB CAV1 SPARCL1 CD74 EEF1A1 KLF2 IFITM3 CLDN5
TMSB4X TPT1 ENPP2 TM4SF1 FOS EIF1 S100A6 CALM1 CD81 HES1 SRGN ID1 GNG11 IGFBP4
STOM GSN TAGLN2 IGFBP7 CD320 FTH1 MCAM HSP90AA1 GNAS MYL6 TIMP3 EPAS1 TNFSF10
PODXL ITM2B SRP14 UBC TGFBR2 KCTD12 GIMAP7 UBA52 RHOA CD59 FTL PCSK5 MYH9 MYL12A
FLT1 CXCL12 LIFR TUBA1B DSTN ARPC1B JUND H3-3B TMBIM6
- This measurement was conducted with 10x 3' v3. Fibroblasts derived from the terminal
ileum of a female individual in her fourth decade, exhibiting Crohn's disease
(CD) related changes.
- This measurement was conducted with 10x 3' v3. Glial cells derived from the ileal
epithelium of a female in her fourth decade.
- source_sentence: MALAT1 DCN MGP APOD GSN LAMA2 CST3 SPARCL1 IGFBP7 TIMP1 VIM EEF1A1
ITM2B FBLN1 C3 IFITM3 FBN1 FTH1 TPT1 ABCA8 C1S TXNIP FTL TIMP3 FN1 CD63 RBMS3
ABCA6 ZBTB20 CEBPD NEAT1 CFH VCAN PTN PTGDS CD81 SERF2 COL6A1 COL6A2 ABI3BP ABCA10
EBF1 COL1A2 PRKG1 S100A6 MGST1 TMSB10 TIMP2 CELF2 LAPTM4A RORA ACTB LTBP4 MYL6
LGALS1 DDX5 SPTBN1 EFEMP1 BICC1 LRP1 H3-3B SCN7A IGFBP4 FAU
sentences:
- This measurement was conducted with 10x 3' v3. CD4+T naive lymphocyte cells derived
from the right cardiac atrium of a European male in his sixties.
- This measurement was conducted with 10x multiome. Fibroblast cell sample taken
from the right ventricle of a European female donor in her fifth decade, who is
a DCD donor. The sample is in nucleus form.
- MALAT1 NEAT1 LINC00486 SLC8A1 VMP1 SAT1 PIK3R5 DIRC3 FMN1 PMP22 RBM47 AGFG1 DIP2B
RBMS1 GNAQ TBC1D14 RAB1A ARHGAP24 DAPK1 SLC1A3 RHOQ SH3BGRL DOCK10 SLCO2B1 RUNX1
ENOX2 LDLRAD4 RNF150 PIAS1 DDX5 WSB1 TSHZ3 SBF2 DOCK2 LRP4 DENND4C FCHSD2 EXOC6B
AFF3 ARHGAP26 DIAPH2 MGAT5 TMEM163 NSMCE2 RBPJ ZEB2 TANC2 BPTF SH3RF3 MFSD14CP
TCF4 RORA-AS1 NOP58 MEF2A EPN2 PICALM ARHGAP15 MEF2C ANKRD12 FCGRT DOCK8 SETX
TBC1D9 KLHL2
datasets:
- jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
- jo-mengr/descriptions_genes
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on NeuML/pubmedbert-base-embeddings
results:
- task:
type: triplet
name: Triplet
dataset:
name: cellxgene pseudo bulk 100k multiplets natural language annotation cell
sentence 2
type: cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_2
metrics:
- type: cosine_accuracy
value: 0.7998002171516418
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: gene description
type: gene_description
metrics:
- type: cosine_accuracy
value: 0.8529999852180481
name: Cosine Accuracy
---
# SentenceTransformer based on NeuML/pubmedbert-base-embeddings
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) on the [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) and [gene_description](https://huggingface.co/datasets/jo-mengr/descriptions_genes) datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) <!-- at revision d6eaca8254bc229f3ca42749a5510ae287eb3486 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation)
- [gene_description](https://huggingface.co/datasets/jo-mengr/descriptions_genes)
- **Language:** code
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): MMContextEncoder(
(text_encoder): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0-11): 12 x BertLayer(
(attention): BertAttention(
(self): BertSdpaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(pooling): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-scvi_fm-v4")
# Run inference
sentences = [
'MALAT1 DCN MGP APOD GSN LAMA2 CST3 SPARCL1 IGFBP7 TIMP1 VIM EEF1A1 ITM2B FBLN1 C3 IFITM3 FBN1 FTH1 TPT1 ABCA8 C1S TXNIP FTL TIMP3 FN1 CD63 RBMS3 ABCA6 ZBTB20 CEBPD NEAT1 CFH VCAN PTN PTGDS CD81 SERF2 COL6A1 COL6A2 ABI3BP ABCA10 EBF1 COL1A2 PRKG1 S100A6 MGST1 TMSB10 TIMP2 CELF2 LAPTM4A RORA ACTB LTBP4 MYL6 LGALS1 DDX5 SPTBN1 EFEMP1 BICC1 LRP1 H3-3B SCN7A IGFBP4 FAU',
'This measurement was conducted with 10x multiome. Fibroblast cell sample taken from the right ventricle of a European female donor in her fifth decade, who is a DCD donor. The sample is in nucleus form.',
"This measurement was conducted with 10x 3' v3. CD4+T naive lymphocyte cells derived from the right cardiac atrium of a European male in his sixties.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.7055, 0.1018],
# [0.7055, 1.0000, 0.1736],
# [0.1018, 0.1736, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_2` and `gene_description`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_2 | gene_description |
|:--------------------|:----------------------------------------------------------------------------------|:-----------------|
| **cosine_accuracy** | **0.7998** | **0.853** |
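These accuracies come from the library's `TripletEvaluator`; a sketch of re-running the gene-description evaluation is shown below (the split name is an assumption, and the anchor/positive/negative_1 column names follow the dataset description later in this card):
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-scvi_fm-v4")

# Assumes an evaluation split with anchor / positive / negative_1 columns (split name assumed).
eval_ds = load_dataset("jo-mengr/descriptions_genes", split="test")
evaluator = TripletEvaluator(
    anchors=eval_ds["anchor"],
    positives=eval_ds["positive"],
    negatives=eval_ds["negative_1"],
    name="gene_description",
)
print(evaluator(model))  # e.g. {'gene_description_cosine_accuracy': 0.853}
```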
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [d518eb2](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/d518eb24af305653b43acd9e26f9502632059e7c)
* Size: 81,143 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:--------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 356 characters</li><li>mean: 385.24 characters</li><li>max: 450 characters</li></ul> | <ul><li>min: 92 characters</li><li>mean: 216.13 characters</li><li>max: 900 characters</li></ul> | <ul><li>min: 103 characters</li><li>mean: 212.72 characters</li><li>max: 1186 characters</li></ul> | <ul><li>min: 353 characters</li><li>mean: 384.82 characters</li><li>max: 433 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>TMSB4X TMSB10 ACTB MALAT1 GNLY NKG7 IFITM2 LGALS1 GZMA EEF1A1 PFN1 HMGB2 FTH1 PTMA HSP90AA1 GZMB ARHGDIB HNRNPA2B1 PLAAT4 FAU CMC1 VIM MYL12A CBX3 ATP5F1E HCST IFI44L KLRF1 H3-3A COX6C ARL6IP1 CFL1 ISG15 HMGB1 S100A4 ATP5MF RORA MYL6 CORO1A OAZ1 KLRB1 ID2 HMGN3 CCNI RBM39 CAP1 SERF2 ELOC FCER1G S100A9 IFI16 YWHAZ EIF1 CALR HMGN2 SKAP2 SLC25A5 ZZZ3 YBX1 NUCB2 CDC42 GSTP1 FTL ATP5F1D</code> | <code>This measurement was conducted with 10x 3' v2. A proliferating lymphocyte cell sample, obtained from a 34-year-old female Asian individual, derived from peripheral blood mononuclear cells.</code> | <code>This measurement was conducted with 10x 3' v2. Sample is a CD8-positive, alpha-beta T cell derived from a 31-year-old Asian female's peripheral blood mononuclear cells.</code> | <code>MALAT1 TMSB4X EEF1A1 TMSB10 FAU TPT1 PTMA EIF1 UBA52 ACTB FTH1 RACK1 FTL H3-3B JUNB ATP5F1E BTG1 CD52 NACA MYL12A PFN1 COX7C COX4I1 SERF2 UQCRB TOMM7 IL32 YBX1 PABPC1 MYL6 EIF3E OAZ1 NOP53 ARHGDIB LDHB HCST SARAF ITM2B ATP6V1G1 SRP14 UBC H3-3A COX6C HINT1 UBB COMMD6 S100A4 S100A6 CALM1 VIM CYBA ENO1 HSP90AA1 FXYD5 HSP90AB1 CIRBP SRSF5 NFKBIA CORO1A LEPROTL1 TLE5 CHCHD2 DDX5 CD69</code> |
| <code>EEF1A1 MALAT1 FTH1 JUNB TPT1 FOS TMSB10 BTG1 TMSB4X ZFP36L2 NACA PABPC1 ACTB FAU VIM H3-3B EIF1 ZFP36 SARAF PTMA IL7R JUN RACK1 EEF2 UBA52 GAPDH FTL FXYD5 DUSP1 S100A4 CD69 CXCR4 UBC TSC22D3 CFL1 KLF6 ARHGDIB KLF2 BTG2 CITED2 IER2 TUBB4B CD3E EEF1G SLC2A3 NFKBIA PFN1 SRGN SNX9 COX4I1 DNAJB1 SERF2 CD8A PCBP2 IL32 BIRC3 SMAP2 FUS GADD45B MYL12A OAZ1 ATP5F1E TUBA4A PNRC1</code> | <code>This measurement was conducted with 10x 5' v1. Sample is a cell from the omentum tissue, specifically an effector memory CD4-positive, alpha-beta T cell, from a female in her sixth decade.</code> | <code>This measurement was conducted with 10x 5' v1. Sample is a CD4-positive helper T cell, specifically Trm_Th1/Th17 subset, derived from the duodenum tissue of a male individual in his sixth decade.</code> | <code>MALAT1 TPT1 EEF1A1 VIM JUND TMSB4X PTMA FTH1 CRIP1 ANXA1 EIF1 UBC H3-3B ACTB SRGN FTL FAU KLF6 IL7R CALM1 UBA52 BTG1 SARAF IL32 TMSB10 PABPC1 HSP90AB1 DDX5 GAPDH TAGLN2 NACA CD44 HSPA5 RORA HSP90AA1 KLRB1 TNFAIP3 ATP5F1E PNRC1 ZFP36L2 H3-3A UBB FOS RACK1 FYN FAM107B GNAS EZR MYL6 CREM NFKBIA PFN1 ARHGDIB SRSF7 CD2 CCNI HNRNPA2B1 COX7C ITM2B SERF2 SH3BGRL3 TSC22D3 LMNA YWHAZ</code> |
| <code>MALAT1 GRIK1 SYT1 PCDH9 RORA NRG1 CADPS ZFPM2 LRRC4C LINGO2 RALYL PTPRD SPHKAP CNTNAP5 SLC8A1 CCSER1 HDAC9 CELF2 R3HDM1 CNTN4 RBMS3 PCDH7 GALNT13 UNC5D ROBO1 SYNPR SNAP25 GPM6A ANK3 FRMPD4 CHRM2 RYR2 KHDRBS2 CADM1 CACNA1D RGS6 PDE4D DOCK4 UNC13C CDH18 FAT3 MEG3 NR2F2-AS1 HMCN1 GULP1 CAMK2D ZEB1 SYN2 DYNC1I1 OXR1 DPP10 OSBPL6 FRAS1 PPP3CA ZNF385D ZMAT4 PCBP3 HS6ST3 ERC2 PLEKHA5 CDK14 MAP2 NCOA1 ATP8A2</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male, specifically from the thalamic complex, specifically the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG).</code> | <code>This measurement was conducted with 10x 3' v3. Astrocyte cell type from the thalamic complex, specifically from the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG) region, of a 42-year-old male.</code> | <code>MALAT1 PCDH9 PLP1 MBP ST18 QKI PDE4B RNF220 PTPRD SEPTIN7 TTLL7 NCKAP5 GPM6B PIP4K2A MOBP SLC44A1 PTGDS PLCL1 MAP7 ELMO1 SIK3 FTH1 ZBTB20 MAN2A1 TMEM165 DOCK10 TCF12 EDIL3 ZEB2 DPYD MAP4K4 PHLPP1 TF GAB1 TRIM2 FRMD4B DNAJC6 MARCHF1 ANK3 DST AGAP1 TMEM144 NEAT1 PLEKHH1 DLG1 CRYAB ERBIN RTN4 SPP1 ATP8A1 DOCK4 SLAIN1 APP DOCK5 APBB2 SAMD12 SHTN1 ZNF536 ZFYVE16 ARAP2 LIMCH1 HIPK2 BCAS1 FAM107B</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
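In code, these parameters correspond to instantiating the loss as follows (shown against the base model for illustration; scale 20.0 with cosine similarity, which are also the library defaults):
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("NeuML/pubmedbert-base-embeddings")
# scale=20.0 and cosine similarity, matching the parameters listed above.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```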
#### gene_description
* Dataset: [gene_description](https://huggingface.co/datasets/jo-mengr/descriptions_genes) at [dd22363](https://huggingface.co/datasets/jo-mengr/descriptions_genes/tree/dd22363de0a7c501f41ba324fb3b8d6ecdd14dc7)
* Size: 116,208 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative_1</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 |
|:--------|:---------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 characters</li><li>mean: 5.88 characters</li><li>max: 12 characters</li></ul> | <ul><li>min: 16 characters</li><li>mean: 367.09 characters</li><li>max: 1375 characters</li></ul> | <ul><li>min: 13 characters</li><li>mean: 167.33 characters</li><li>max: 1375 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 |
|:------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
| <code>A1BG</code> | <code>The protein encoded by this gene is a plasma glycoprotein of unknown function. The protein shows sequence similarity to the variable regions of some immunoglobulin supergene family member proteins. [provided by RefSeq, Jul 2008]</code> | <code>A1BG antisense RNA 1</code> |
| <code>A1BG</code> | <code>The protein encoded by this gene is a plasma glycoprotein of unknown function. The protein shows sequence similarity to the variable regions of some immunoglobulin supergene family member proteins. [provided by RefSeq, Jul 2008]</code> | <code>G antigen 12D</code> |
| <code>A1BG</code> | <code>The protein encoded by this gene is a plasma glycoprotein of unknown function. The protein shows sequence similarity to the variable regions of some immunoglobulin supergene family member proteins. [provided by RefSeq, Jul 2008]</code> | <code>G antigen 12B</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Datasets
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [d518eb2](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/d518eb24af305653b43acd9e26f9502632059e7c)
* Size: 9,011 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 347 characters</li><li>mean: 386.7 characters</li><li>max: 437 characters</li></ul> | <ul><li>min: 99 characters</li><li>mean: 209.99 characters</li><li>max: 941 characters</li></ul> | <ul><li>min: 101 characters</li><li>mean: 208.8 characters</li><li>max: 728 characters</li></ul> | <ul><li>min: 356 characters</li><li>mean: 386.56 characters</li><li>max: 434 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>MALAT1 EEF1A1 FTH1 TMSB4X ACTB FTL RTN4 ATP6V0B TPT1 FAU S100A6 NDUFA4 ATP5F1E COX7C ITM2B IGFBP7 EIF1 C12orf75 CD9 COX7B SERF2 ATP1B1 COX8A TXNIP NDUFB2 MYL6 PPDPF COX6B1 UQCR11 APOE COX4I1 CALM2 UQCRB S100A11 UQCRQ COX6C ATP5MG BSG ATP6AP2 UQCR10 PTMA NACA UBL5 UBA52 TMSB10 ADGRF5 HSP90AA1 GSTP1 ATP5F1D CHCHD2 GAPDH COX7A2 SKP1 HSPE1 PRDX1 CYSTM1 LGALS3 CD63 ATP5MJ CKB NDUFS5 ATP5ME UBB MAL</code> | <code>This measurement was conducted with 10x 3' v3. Cell sample from the cortex of kidney, taken from a 43-year-old male of European ethnicity with a reported history of kidney cancer. The cell type is identified as a kidney collecting duct intercalated cell.</code> | <code>This measurement was conducted with 10x 3' v3. Cell sample from the cortex of kidney, taken from a 72-year-old male of European ethnicity, identified as a kidney collecting duct intercalated cell, and preserved through cryopreservation.</code> | <code>MALAT1 TMSB4X TMSB10 ACTB TXNIP EEF1A1 TPT1 PFN1 BTG1 FAU PTMA S100A4 ATP5F1E EIF1 FTL CFL1 CYBA MYL12A SRGN SERF2 SH3BGRL3 CALM1 TYROBP MYL6 ZFP36 KLRD1 UBB NACA S100A6 UBA52 HSP90AA1 H3-3B LCP1 FTH1 DDIT4 FOS PPIA CD247 RACK1 TMA7 CORO1A OAZ1 TLE5 ARPC3 GAPDH KLF2 UBC ZFP36L2 TSC22D3 ITGB2 ARPC2 ATP5MG HOPX IFITM2 HMGB1 OST4 EEF1G PRDM1 CDC42 GSTP1 NDUFB2 CIRBP LGALS1 CHCHD2</code> |
| <code>MALAT1 KCND2 NRXN1 CDH18 NRXN3 ZNF385D CADM2 RALYL NKAIN2 CADPS2 RIMS1 FSTL5 GRID2 TRPM3 CHN2 DPP6 JMJD1C RORA PDE1A UNC13C TIAM1 NRG1 SNAP25 ZFPM2 CALN1 LSAMP CNTN1 ABLIM1 SYNE1 ANK3 CA10 NFIA ZBTB20 NTM CADM1 OPCML RELN DNM3 NEBL ERC1 SCN2A PPP3CA CACNA1A GALNT13 LRRC4C GPM6A RABGAP1L RIT2 CAMK4 GRIA4 PTPRD RBFOX3 MCTP1 LHFPL6 PCLO MEG3 PDE10A NOVA1 RTN1 ZNF385B CNTN4 GABRB2 SPOCK1 OXR1</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male cerebellum, specifically from the Cerebellar Vermis - CBV region, with European self-reported ethnicity, analyzed at the nucleus level.</code> | <code>This measurement was conducted with 10x 3' v3. Sample is an oligodendrocyte precursor cell taken from the cerebellum tissue of a 42-year-old human male, specifically from the Cerebellum (CB) - Cerebellar Vermis - CBV dissection.</code> | <code>MALAT1 NRXN3 SNTG1 UNC5C GRIA4 NRG1 RORA INPP4B CLSTN2 NKAIN2 FRMD4A DPP6 GRID2 NRXN1 LSAMP JMJD1C HS6ST3 NXPH1 MIR99AHG LRRC4C NTM CCNH NFIA ZFPM2 AFF3 OPCML PTPRT CADM2 ZBTB20 OLFM3 SLC22A3 CNTNAP5 CACNA2D3 CNTN4 KCND2 ADARB2 XKR4 GPM6A IL1RAPL1 ALK ANKRD36C UBE2E2 SYN3 GARNL3 PTPRG DAB1 TCF4 LINC00461 PRANCR GRIN2B TNRC6B MAPK10 NOVA1 NFIB ANK3 KCNMA1 KCNQ5 SPON1 TRIM9 VWA8 GDAP1 GABRG2 AHI1 ATP1B1</code> |
| <code>EEF1A1 ACTB GAPDH HMGN2 PTMA SERF2 TMSB4X CD74 PABPC1 FTH1 TMSB10 FAU PFN1 HMGN1 OAZ1 HMGB1 TPT1 PPIA NACA BTF3 MALAT1 MYL6 ATP5MG CFL1 RACK1 ODC1 ATP5F1E TMA7 SLC25A5 ELOB ARPC3 NPM1 COX7C ANP32B C4orf3 EIF1 PCBP2 KLF6 LAPTM5 COX8A RHOA HSPA8 H3-3B PTP4A2 UBA52 OST4 CIRBP LGALS1 EIF3L STMN1 PPDPF COX4I1 RAN EIF3F PPP1CC COMMD6 NDUFA4 YBX1 PEBP1 COTL1 COX7A2 HSPE1 CCNI TRIR</code> | <code>This measurement was conducted with 10x 5' v1. Cell sample from the tonsil of a 9-year-old female with recurrent tonsillitis, characterized as a centroblast B cell with IGLC2, IGLV7-43, IGLJ3 immunoglobulin genes expressed.</code> | <code>This measurement was conducted with 10x 5' v1. Germinal center B cell derived from the tonsil tissue of a 3-year-old male with recurrent tonsillitis.</code> | <code>CD74 MALAT1 EEF1A1 SSR4 TPT1 UBC EEF2 SAT1 RACK1 SEC11C ATP5MG FAU TSC22D3 PPIB XBP1 FTL GAPDH HLA-DRB5 HERPUD1 RGS2 HSPA8 TMSB4X HSP90B1 EIF1 PTMA SERP1 SERF2 NACA SEC61B GSTP1 UBA52 HSPA5 BTF3 LAPTM5 HSPE1 H3-3B ATP5F1A SEC61G CD38 EDF1 FTH1 IL16 NPM1 OST4 CIRBP EIF3E OAZ1 CYTIP PCBP2 MYDGF COX6B1 ZFP36 CSDE1 PABPC1 REXO2 KDELR1 PFN1 PTP4A1 TMBIM6 H1-10 PSAP UBE2J1 VIM MYL6</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### gene_description
* Dataset: [gene_description](https://huggingface.co/datasets/jo-mengr/descriptions_genes) at [dd22363](https://huggingface.co/datasets/jo-mengr/descriptions_genes/tree/dd22363de0a7c501f41ba324fb3b8d6ecdd14dc7)
* Size: 1,000 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative_1</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 |
|:--------|:---------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 characters</li><li>mean: 5.88 characters</li><li>max: 12 characters</li></ul> | <ul><li>min: 16 characters</li><li>mean: 367.09 characters</li><li>max: 1375 characters</li></ul> | <ul><li>min: 13 characters</li><li>mean: 167.33 characters</li><li>max: 1375 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 |
|:------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
| <code>A1BG</code> | <code>The protein encoded by this gene is a plasma glycoprotein of unknown function. The protein shows sequence similarity to the variable regions of some immunoglobulin supergene family member proteins. [provided by RefSeq, Jul 2008]</code> | <code>A1BG antisense RNA 1</code> |
| <code>A1BG</code> | <code>The protein encoded by this gene is a plasma glycoprotein of unknown function. The protein shows sequence similarity to the variable regions of some immunoglobulin supergene family member proteins. [provided by RefSeq, Jul 2008]</code> | <code>G antigen 12D</code> |
| <code>A1BG</code> | <code>The protein encoded by this gene is a plasma glycoprotein of unknown function. The protein shows sequence similarity to the variable regions of some immunoglobulin supergene family member proteins. [provided by RefSeq, Jul 2008]</code> | <code>G antigen 12B</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `bf16`: True
- `gradient_checkpointing`: True
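Expressed as `SentenceTransformerTrainingArguments`, these non-default values would look roughly like this (a sketch; `output_dir` is assumed):
```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="mmcontext-pubmedbert-scvi_fm-v4",  # assumed output directory
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    num_train_epochs=4,
    warmup_ratio=0.1,
    bf16=True,
    gradient_checkpointing=True,
)
```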
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | cellxgene pseudo bulk 100k multiplets natural language annotation loss | gene description loss | cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_2_cosine_accuracy | gene_description_cosine_accuracy |
|:------:|:----:|:-------------:|:----------------------------------------------------------------------:|:---------------------:|:-------------------------------------------------------------------------------------------------:|:--------------------------------:|
| 0.0324 | 50 | 11.0051 | 19.5558 | 5.9316 | 0.5204 | 0.1620 |
| 0.0649 | 100 | 8.5524 | 18.3045 | 5.4608 | 0.5205 | 0.1780 |
| 0.0973 | 150 | 9.1555 | 15.2629 | 4.9639 | 0.5227 | 0.2030 |
| 0.1297 | 200 | 7.3013 | 12.0366 | 4.6165 | 0.5261 | 0.2950 |
| 0.1621 | 250 | 6.2449 | 8.6096 | 4.4175 | 0.5237 | 0.3670 |
| 0.1946 | 300 | 4.747 | 6.8549 | 4.2856 | 0.5323 | 0.4470 |
| 0.2270 | 350 | 4.1084 | 5.9229 | 4.1099 | 0.5433 | 0.5250 |
| 0.2594 | 400 | 3.7825 | 5.6144 | 3.9683 | 0.5774 | 0.5650 |
| 0.2918 | 450 | 3.2789 | 5.3758 | 3.7826 | 0.6078 | 0.6010 |
| 0.3243 | 500 | 3.3021 | 5.2209 | 3.6423 | 0.6263 | 0.6410 |
| 0.3567 | 550 | 3.263 | 5.0950 | 3.5284 | 0.6483 | 0.6680 |
| 0.3891 | 600 | 3.0911 | 4.9479 | 3.4288 | 0.6718 | 0.6870 |
| 0.4215 | 650 | 2.7839 | 4.9800 | 3.3718 | 0.6813 | 0.6980 |
| 0.4540 | 700 | 3.1002 | 4.8639 | 3.3248 | 0.6980 | 0.7180 |
| 0.4864 | 750 | 2.5449 | 4.8535 | 3.2514 | 0.7064 | 0.7220 |
| 0.5188 | 800 | 2.7304 | 4.7861 | 3.2215 | 0.7126 | 0.7390 |
| 0.5512 | 850 | 3.0358 | 4.6504 | 3.2075 | 0.7175 | 0.7350 |
| 0.5837 | 900 | 2.714 | 4.5892 | 3.1745 | 0.7241 | 0.7280 |
| 0.6161 | 950 | 2.5546 | 4.5810 | 3.1692 | 0.7286 | 0.7430 |
| 0.6485 | 1000 | 2.5849 | 4.6362 | 3.1320 | 0.7324 | 0.7550 |
| 0.6809 | 1050 | 2.5495 | 4.5253 | 3.1114 | 0.7401 | 0.7530 |
| 0.7134 | 1100 | 2.8216 | 4.5000 | 3.0808 | 0.7433 | 0.7620 |
| 0.7458 | 1150 | 2.3656 | 4.4587 | 3.0792 | 0.7448 | 0.7650 |
| 0.7782 | 1200 | 2.5788 | 4.4687 | 3.0736 | 0.7461 | 0.7640 |
| 0.8106 | 1250 | 2.6446 | 4.5501 | 3.0465 | 0.7505 | 0.7690 |
| 0.8431 | 1300 | 2.5293 | 4.4037 | 3.0937 | 0.7563 | 0.7620 |
| 0.8755 | 1350 | 2.3749 | 4.4344 | 3.0370 | 0.7573 | 0.7740 |
| 0.9079 | 1400 | 2.5008 | 4.3406 | 3.0262 | 0.7580 | 0.7900 |
| 0.9403 | 1450 | 2.3166 | 4.3110 | 3.0085 | 0.7584 | 0.7880 |
| 0.9728 | 1500 | 2.2504 | 4.4155 | 3.0176 | 0.7584 | 0.7870 |
| 1.0052 | 1550 | 2.4519 | 4.2985 | 2.9737 | 0.7620 | 0.7960 |
| 1.0376 | 1600 | 2.1929 | 4.3602 | 2.9951 | 0.7624 | 0.7920 |
| 1.0700 | 1650 | 2.1373 | 4.3623 | 2.9661 | 0.7645 | 0.8030 |
| 1.1025 | 1700 | 2.234 | 4.3367 | 2.9720 | 0.7663 | 0.8010 |
| 1.1349 | 1750 | 2.2874 | 4.3642 | 2.9487 | 0.7672 | 0.7950 |
| 1.1673 | 1800 | 2.0128 | 4.3900 | 2.9582 | 0.7666 | 0.7950 |
| 1.1997 | 1850 | 2.2543 | 4.3268 | 2.9273 | 0.7702 | 0.8010 |
| 1.2322 | 1900 | 2.1586 | 4.3627 | 2.9164 | 0.7744 | 0.8050 |
| 1.2646 | 1950 | 2.2073 | 4.3333 | 2.9377 | 0.7719 | 0.8040 |
| 1.2970 | 2000 | 2.1069 | 4.2816 | 2.9226 | 0.7777 | 0.8030 |
| 1.3294 | 2050 | 2.258 | 4.3574 | 2.9176 | 0.7765 | 0.8090 |
| 1.3619 | 2100 | 2.0805 | 4.2716 | 2.9050 | 0.7775 | 0.8100 |
| 1.3943 | 2150 | 2.1292 | 4.2750 | 2.8953 | 0.7794 | 0.8110 |
| 1.4267 | 2200 | 2.2603 | 4.2815 | 2.9089 | 0.7790 | 0.8000 |
| 1.4591 | 2250 | 2.2981 | 4.2431 | 2.8886 | 0.7819 | 0.8010 |
| 1.4916 | 2300 | 2.1191 | 4.2329 | 2.8611 | 0.7806 | 0.8100 |
| 1.5240 | 2350 | 2.2504 | 4.2094 | 2.8505 | 0.7840 | 0.8230 |
| 1.5564 | 2400 | 2.2387 | 4.1801 | 2.8624 | 0.7833 | 0.8240 |
| 1.5888 | 2450 | 1.9941 | 4.2167 | 2.8766 | 0.7828 | 0.8240 |
| 1.6213 | 2500 | 2.2409 | 4.2369 | 2.8512 | 0.7840 | 0.8350 |
| 1.6537 | 2550 | 2.2975 | 4.1915 | 2.8641 | 0.7829 | 0.8280 |
| 1.6861 | 2600 | 2.128 | 4.2368 | 2.8507 | 0.7893 | 0.8270 |
| 1.7185 | 2650 | 2.2529 | 4.1549 | 2.8441 | 0.7866 | 0.8350 |
| 1.7510 | 2700 | 2.1911 | 4.2100 | 2.8232 | 0.7876 | 0.8400 |
| 1.7834 | 2750 | 2.1689 | 4.2249 | 2.8173 | 0.7898 | 0.8410 |
| 1.8158 | 2800 | 2.3684 | 4.0823 | 2.8225 | 0.7907 | 0.8410 |
| 1.8482 | 2850 | 2.1958 | 4.1480 | 2.8490 | 0.7918 | 0.8350 |
| 1.8807 | 2900 | 2.1134 | 4.1486 | 2.8600 | 0.7897 | 0.8430 |
| 1.9131 | 2950 | 2.1197 | 4.0934 | 2.8413 | 0.7943 | 0.8440 |
| 1.9455 | 3000 | 2.0836 | 4.1131 | 2.8306 | 0.7933 | 0.8440 |
| 1.9780 | 3050 | 2.222 | 4.0519 | 2.8211 | 0.7913 | 0.8430 |
| 2.0104 | 3100 | 2.1054 | 4.0644 | 2.8385 | 0.7919 | 0.8450 |
| 2.0428 | 3150 | 2.0689 | 4.0449 | 2.8383 | 0.7934 | 0.8450 |
| 2.0752 | 3200 | 2.0874 | 4.0750 | 2.8307 | 0.7945 | 0.8470 |
| 2.1077 | 3250 | 2.1192 | 4.0471 | 2.8275 | 0.7964 | 0.8480 |
| 2.1401 | 3300 | 2.275 | 4.0727 | 2.8249 | 0.7950 | 0.8480 |
| 2.1725 | 3350 | 1.9172 | 4.0797 | 2.8202 | 0.7944 | 0.8480 |
| 2.2049 | 3400 | 2.0652 | 4.0259 | 2.8226 | 0.7954 | 0.8460 |
| 2.2374 | 3450 | 2.0888 | 4.0551 | 2.8195 | 0.7957 | 0.8440 |
| 2.2698 | 3500 | 1.97 | 4.0646 | 2.8181 | 0.7946 | 0.8470 |
| 2.3022 | 3550 | 1.9869 | 4.0582 | 2.8220 | 0.7954 | 0.8440 |
| 2.3346 | 3600 | 2.009 | 4.0612 | 2.8190 | 0.7938 | 0.8470 |
| 2.3671 | 3650 | 1.9372 | 4.0352 | 2.8161 | 0.7965 | 0.8470 |
| 2.3995 | 3700 | 2.0278 | 4.0567 | 2.8152 | 0.7947 | 0.8490 |
| 2.4319 | 3750 | 2.1537 | 4.0557 | 2.8050 | 0.7958 | 0.8520 |
| 2.4643 | 3800 | 1.9284 | 4.0474 | 2.8075 | 0.7953 | 0.8490 |
| 2.4968 | 3850 | 2.1835 | 4.0568 | 2.8134 | 0.7948 | 0.8470 |
| 2.5292 | 3900 | 2.1061 | 4.0570 | 2.8171 | 0.7953 | 0.8480 |
| 2.5616 | 3950 | 1.9715 | 4.0613 | 2.8197 | 0.7967 | 0.8460 |
| 2.5940 | 4000 | 1.9469 | 4.0501 | 2.8145 | 0.7954 | 0.8470 |
| 2.6265 | 4050 | 2.1233 | 4.0725 | 2.8151 | 0.7960 | 0.8490 |
| 2.6589 | 4100 | 1.9519 | 4.0427 | 2.8153 | 0.7953 | 0.8480 |
| 2.6913 | 4150 | 2.0981 | 4.0457 | 2.8098 | 0.7962 | 0.8490 |
| 2.7237 | 4200 | 2.0842 | 4.0475 | 2.8101 | 0.7975 | 0.8480 |
| 2.7562 | 4250 | 2.0576 | 4.0471 | 2.8077 | 0.7951 | 0.8460 |
| 2.7886 | 4300 | 1.9352 | 4.0529 | 2.8015 | 0.7964 | 0.8490 |
| 2.8210 | 4350 | 2.0641 | 4.0443 | 2.8028 | 0.7964 | 0.8480 |
| 2.8534 | 4400 | 1.9967 | 4.0272 | 2.8071 | 0.7967 | 0.8480 |
| 2.8859 | 4450 | 1.9073 | 4.0360 | 2.8078 | 0.7968 | 0.8480 |
| 2.9183 | 4500 | 2.0812 | 4.0231 | 2.8121 | 0.7969 | 0.8490 |
| 2.9507 | 4550 | 2.0154 | 4.0286 | 2.8167 | 0.7962 | 0.8460 |
| 2.9831 | 4600 | 1.937 | 4.0263 | 2.8110 | 0.7971 | 0.8490 |
| 3.0156 | 4650 | 2.3463 | 4.0144 | 2.8127 | 0.7989 | 0.8480 |
| 3.0480 | 4700 | 1.9581 | 4.0258 | 2.8065 | 0.7975 | 0.8500 |
| 3.0804 | 4750 | 1.8761 | 4.0084 | 2.8003 | 0.7986 | 0.8540 |
| 3.1128 | 4800 | 2.0824 | 4.0026 | 2.8058 | 0.7991 | 0.8520 |
| 3.1453 | 4850 | 1.9138 | 4.0126 | 2.8069 | 0.7990 | 0.8510 |
| 3.1777 | 4900 | 1.9786 | 4.0166 | 2.8096 | 0.7978 | 0.8510 |
| 3.2101 | 4950 | 2.095 | 4.0061 | 2.8040 | 0.7977 | 0.8530 |
| 3.2425 | 5000 | 2.0427 | 4.0114 | 2.8022 | 0.7978 | 0.8530 |
| 3.2750 | 5050 | 2.0037 | 4.0060 | 2.7975 | 0.7984 | 0.8540 |
| 3.3074 | 5100 | 2.0026 | 4.0307 | 2.7966 | 0.7989 | 0.8550 |
| 3.3398 | 5150 | 1.921 | 4.0233 | 2.7963 | 0.7984 | 0.8560 |
| 3.3722 | 5200 | 2.0058 | 4.0238 | 2.7918 | 0.7980 | 0.8550 |
| 3.4047 | 5250 | 2.145 | 4.0257 | 2.7946 | 0.7989 | 0.8530 |
| 3.4371 | 5300 | 2.0656 | 4.0218 | 2.7893 | 0.7995 | 0.8560 |
| 3.4695 | 5350 | 2.1 | 4.0394 | 2.7865 | 0.7994 | 0.8560 |
| 3.5019 | 5400 | 1.9337 | 4.0328 | 2.7898 | 0.7991 | 0.8550 |
| 3.5344 | 5450 | 2.13 | 4.0326 | 2.7867 | 0.7990 | 0.8560 |
| 3.5668 | 5500 | 2.2024 | 4.0158 | 2.7886 | 0.8002 | 0.8530 |
| 3.5992 | 5550 | 1.9329 | 4.0117 | 2.7890 | 0.8001 | 0.8540 |
| 3.6316 | 5600 | 1.7992 | 4.0143 | 2.7907 | 0.8000 | 0.8540 |
| 3.6641 | 5650 | 1.7773 | 4.0062 | 2.7875 | 0.7992 | 0.8530 |
| 3.6965 | 5700 | 1.9803 | 4.0102 | 2.7840 | 0.7986 | 0.8540 |
| 3.7289 | 5750 | 2.1155 | 4.0132 | 2.7829 | 0.7995 | 0.8540 |
| 3.7613 | 5800 | 2.1141 | 4.0078 | 2.7849 | 0.7997 | 0.8540 |
| 3.7938 | 5850 | 2.1378 | 3.9935 | 2.7874 | 0.8004 | 0.8540 |
| 3.8262 | 5900 | 1.9946 | 4.0002 | 2.7876 | 0.7995 | 0.8540 |
| 3.8586 | 5950 | 2.0836 | 4.0128 | 2.7890 | 0.7989 | 0.8540 |
| 3.8911 | 6000 | 1.7896 | 4.0140 | 2.7859 | 0.7991 | 0.8540 |
| 3.9235 | 6050 | 1.9789 | 4.0146 | 2.7860 | 0.7990 | 0.8530 |
| 3.9559 | 6100 | 2.0113 | 4.0148 | 2.7860 | 0.7997 | 0.8530 |
| 3.9883 | 6150 | 2.0277 | 4.0134 | 2.7859 | 0.7998 | 0.8530 |
</details>
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 5.0.0
- Transformers: 4.55.0.dev0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.9.0
- Datasets: 2.19.1
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
chansung/Qwen2.5-Coder-7B-CCRL-CUR-VAR-ASCE-NORMAL-1E | chansung | 2025-08-18T16:47:56Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:chansung/verifiable-coding-problems-python-v2", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-17T17:17:28Z |
---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
datasets: chansung/verifiable-coding-problems-python-v2
library_name: transformers
model_name: Qwen2.5-Coder-7B-CCRL-CUR-VAR-ASCE-NORMAL-1E
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-Coder-7B-CCRL-CUR-VAR-ASCE-NORMAL-1E
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the [chansung/verifiable-coding-problems-python-v2](https://huggingface.co/datasets/chansung/verifiable-coding-problems-python-v2) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chansung/Qwen2.5-Coder-7B-CCRL-CUR-VAR-ASCE-NORMAL-1E", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chansung18/huggingface/runs/ebv7vaxu)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
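For reference, a minimal GRPO training sketch with TRL is shown below. The reward function is a stand-in placeholder (the actual run scores generated code against verifiable test cases), and the hyperparameters are illustrative only, not the settings used for this checkpoint.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# The dataset must expose a "prompt" column, as in the dataset linked above
dataset = load_dataset("chansung/verifiable-coding-problems-python-v2", split="train")

def placeholder_reward(completions, **kwargs):
    # Toy reward (prefers shorter completions); the real run verifies code correctness instead
    return [-float(len(str(c))) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Coder-7B-Instruct",
    reward_funcs=placeholder_reward,
    args=GRPOConfig(output_dir="Qwen2.5-Coder-7B-CCRL", num_generations=4),
    train_dataset=dataset,
)
trainer.train()
```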
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tevykuch/finbert_ir
|
tevykuch
| 2025-08-18T16:44:26Z | 0 | 0 |
transformers
|
[
"transformers",
"joblib",
"safetensors",
"generated_from_trainer",
"base_model:yiyanghkust/finbert-tone",
"base_model:finetune:yiyanghkust/finbert-tone",
"endpoints_compatible",
"region:us"
] | null | 2025-07-23T14:30:26Z |
---
library_name: transformers
base_model: yiyanghkust/finbert-tone
tags:
- generated_from_trainer
model-index:
- name: finbert_ir
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finbert_ir
This model is a fine-tuned version of [yiyanghkust/finbert-tone](https://huggingface.co/yiyanghkust/finbert-tone) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 58
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5517 | 1.0 | 59 | 3.4447 |
| 2.9713 | 2.0 | 118 | 2.8364 |
| 2.6364 | 3.0 | 177 | 2.5676 |
| 2.484 | 4.0 | 236 | 2.4486 |
| 2.4168 | 5.0 | 295 | 2.3868 |
| 2.3729 | 6.0 | 354 | 2.3547 |
| 2.3719 | 7.0 | 413 | 2.3387 |
| 2.3394 | 8.0 | 472 | 2.3285 |
| 2.3765 | 9.0 | 531 | 2.3238 |
| 2.3521 | 10.0 | 590 | 2.3222 |
### Framework versions
- Transformers 4.55.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Xenova/UAE-Large-V1
|
Xenova
| 2025-08-18T16:14:35Z | 338 | 2 |
transformers.js
|
[
"transformers.js",
"onnx",
"bert",
"feature-extraction",
"base_model:WhereIsAI/UAE-Large-V1",
"base_model:quantized:WhereIsAI/UAE-Large-V1",
"region:us"
] |
feature-extraction
| 2023-12-11T15:12:18Z |
---
base_model: WhereIsAI/UAE-Large-V1
library_name: transformers.js
---
https://huggingface.co/WhereIsAI/UAE-Large-V1 with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
You can then use the model to compute embeddings like this:
```js
import { pipeline } from '@huggingface/transformers';
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/UAE-Large-V1', {
dtype: "fp32" // Options: "fp32", "fp16", "q8", "q4"
});
// Compute sentence embeddings
const sentences = ['That is a happy person', 'That is a very happy person'];
const output = await extractor(sentences, { pooling: 'cls' });
console.log(output);
// Tensor {
// dims: [ 2, 1024 ],
// type: 'float32',
// data: Float32Array(2048) [ -0.1308155655860901, 0.44334232807159424, ... ],
// size: 2048
// }
```
Compute cosine similarity between the two sentences:
```js
import { cos_sim } from '@huggingface/transformers';
console.log(cos_sim(output[0].data, output[1].data))
// 0.9586893906734091
```
You can convert the `output` Tensor to a nested JavaScript array using `.tolist()`:
```js
console.log(output.tolist());
// [
// [ -0.1308155655860901, 0.44334232807159424, -0.12212765961885452, ... ],
// [ 0.03931744396686554, 0.30553528666496277, -0.19462820887565613, ... ]
// ]
```
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
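As a rough illustration of that conversion step, here is a minimal Python sketch (assuming `optimum[onnxruntime]` is installed; the output folder name is arbitrary) that exports the original PyTorch checkpoint to ONNX:

```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer

model_id = "WhereIsAI/UAE-Large-V1"

# export=True converts the PyTorch weights to ONNX on the fly
ort_model = ORTModelForFeatureExtraction.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Mirror this repo's layout by keeping the ONNX weights in an `onnx` subfolder
ort_model.save_pretrained("UAE-Large-V1-web/onnx")
tokenizer.save_pretrained("UAE-Large-V1-web")
```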
|
andrewelawrence/AMwithLLMs-Meta-Llama-3.1-8B-Instruct-bnb-4bit
|
andrewelawrence
| 2025-08-18T16:05:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:other",
"region:us"
] | null | 2025-08-14T21:00:44Z |
---
library_name: peft
license: other
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: AMwithLLMs-Meta-Llama-3.1-8B-Instruct-bnb-4bit
results: []
---
# AMwithLLMs-Meta-Llama-3.1-8B-Instruct-bnb-4bit
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit) on the Persuasive Essays (PE), Cornell eRulemaking Corpus (CDCP), and Abstracts of Randomized Control Trials (AbstRCT) datasets. It implements the fine-tuning process described in [Argument Mining with Fine-Tuned Large Language Models](https://aclanthology.org/2025.coling-main.442/) (Cabessa et al., COLING 2025), with code available at [https://github.com/mohammadoumar/AMwithLLMs](https://github.com/mohammadoumar/AMwithLLMs).
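### Usage
Since this repository contains a LoRA adapter rather than full model weights, it has to be loaded on top of the base model. A minimal sketch, assuming `peft` and `transformers` are installed:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit"
adapter_id = "andrewelawrence/AMwithLLMs-Meta-Llama-3.1-8B-Instruct-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter to the 4-bit base model
model = PeftModel.from_pretrained(base_model, adapter_id)
```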
### Citation
```
@inproceedings{cabessa-etal-2025-argument,
author = "Cabessa, Jeremie and Hernault, Hugo and Mushtaq, Umer",
title = "Argument Mining with Fine-Tuned Large Language Models",
publisher = "Association for Computational Linguistics",
booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
editor = "Rambow, Owen and Wanner, Leo and Apidianaki, Marianna and Al-Khalifa, Hend and Eugenio, Barbara Di and Schockaert, Steven",
month = jan,
year = "2025",
address = "Abu Dhabi, UAE",
url = "https://aclanthology.org/2025.coling-main.442/",
pages = "6624--6635",
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
EsthefanoMC23/blip-captioning-base-personal
|
EsthefanoMC23
| 2025-08-18T15:48:44Z | 0 | 0 | null |
[
"pytorch",
"tf",
"blip",
"image-captioning",
"image-to-text",
"arxiv:2201.12086",
"license:bsd-3-clause",
"region:us"
] |
image-to-text
| 2025-08-18T00:04:28Z |
---
pipeline_tag: image-to-text
tags:
- image-captioning
languages:
- en
license: bsd-3-clause
---
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone).
|  |
|:--:|
| <b>Pull figure from BLIP official repo. Image source: https://github.com/salesforce/BLIP</b> |
## TL;DR
Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:
*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*
## Usage
You can use this model for conditional and un-conditional image captioning
### Using the Pytorch model
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>
## Ethical Considerations
This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
## BibTex and citation info
```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
doi = {10.48550/ARXIV.2201.12086},
url = {https://arxiv.org/abs/2201.12086},
author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
paperboygold/gpt-oss-sanguine-20b-q5-gguf
|
paperboygold
| 2025-08-18T15:43:45Z | 0 | 0 | null |
[
"gguf",
"quantized",
"llama-cpp",
"gpt-oss",
"roleplay",
"base_model:paperboygold/gpt-oss-sanguine-20b-v1",
"base_model:quantized:paperboygold/gpt-oss-sanguine-20b-v1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T15:37:03Z |
---
license: mit
base_model: paperboygold/gpt-oss-sanguine-20b-v1
tags:
- gguf
- quantized
- llama-cpp
- gpt-oss
- roleplay
---
# Sanguine Scribe Q5 GGUF
This is a quantized version of [gpt-oss-sanguine-20b-v1](https://huggingface.co/paperboygold/gpt-oss-sanguine-20b-v1), a consequence-based alignment model for character roleplay.
- **File size**: 15.73 GB
- **Quantization**: Q5
- **Format**: GGUF (llama.cpp compatible)
## Usage with llama.cpp
```bash
# Download the model
huggingface-cli download paperboygold/sanguine-scribe-q5-gguf sanguine_scribe_q5_k_m_20250818_140934.gguf --local-dir ./
# Run with llama.cpp
./llama-cli -m sanguine_scribe_q5_k_m_20250818_140934.gguf -p "You are a tavern keeper. A hooded stranger approaches."
```
## Original Model Details
Sanguine Scribe implements consequence-based alignment for character roleplay:
- **Base Model**: openai/gpt-oss-20b
- **Training Dataset**: 350,969 examples from 40+ sources
- **Training Loss**: 4.1 → 1.31 over 500 steps
- **Approach**: Realistic consequences instead of refusal responses
Perfect for immersive character roleplay and interactive storytelling.
|
paperboygold/gpt-oss-sanguine-20b-8bit-bnb
|
paperboygold
| 2025-08-18T15:42:40Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"quantized",
"gpt-oss",
"roleplay",
"consequence-based-alignment",
"base_model:paperboygold/gpt-oss-sanguine-20b-v1",
"base_model:quantized:paperboygold/gpt-oss-sanguine-20b-v1",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-08-18T14:56:29Z |
---
license: mit
base_model: paperboygold/gpt-oss-sanguine-20b-v1
tags:
- quantized
- gpt-oss
- roleplay
- consequence-based-alignment
---
# sanguine-scribe-8bit-bnb
8-bit quantized version using BitsAndBytes for balanced quality and efficiency.
This is a quantized version of [gpt-oss-sanguine-20b-v1](https://huggingface.co/paperboygold/gpt-oss-sanguine-20b-v1), a consequence-based alignment model for character roleplay.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("paperboygold/sanguine-scribe-8bit-bnb")
model = AutoModelForCausalLM.from_pretrained(
"paperboygold/sanguine-scribe-8bit-bnb",
device_map="auto",
trust_remote_code=True
)
```
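A checkpoint like this can be produced (approximately) by loading the full-precision model with an 8-bit BitsAndBytes config and saving the result. This is a sketch of that general recipe, not the exact command used for this repository:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "paperboygold/gpt-oss-sanguine-20b-v1",  # full-precision source checkpoint
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model.save_pretrained("sanguine-scribe-8bit-bnb")
```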
## Original Model
- **Base Model**: openai/gpt-oss-20b
- **Training Dataset**: [sanguine-dataset-v1](https://huggingface.co/datasets/paperboygold/sanguine-dataset-v1) (350K examples)
- **Training Loss**: 4.1 → 1.31 (500 steps)
|
donoway/GSM8K-Binary_Llama-3.2-1B-8kwse8de
|
donoway
| 2025-08-18T15:04:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T14:13:26Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: GSM8K-Binary_Llama-3.2-1B-8kwse8de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GSM8K-Binary_Llama-3.2-1B-8kwse8de
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4787
- Model Preparation Time: 0.0059
- Mdl: 5279.8389
- Accumulated Loss: 3659.7055
- Correct Preds: 1822.0
- Total Preds: 2475.0
- Accuracy: 0.7362
- Correct Gen Preds: 1743.0
- Gen Accuracy: 0.7042
- Correct Gen Preds 34192: 834.0
- Correct Preds 34192: 870.0
- Total Labels 34192: 1196.0
- Accuracy 34192: 0.7274
- Gen Accuracy 34192: 0.6973
- Correct Gen Preds 41568: 900.0
- Correct Preds 41568: 952.0
- Total Labels 41568: 1267.0
- Accuracy 41568: 0.7514
- Gen Accuracy 41568: 0.7103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 34192 | Correct Preds 34192 | Total Labels 34192 | Accuracy 34192 | Gen Accuracy 34192 | Correct Gen Preds 41568 | Correct Preds 41568 | Total Labels 41568 | Accuracy 41568 | Gen Accuracy 41568 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:-----------------------:|:-------------------:|:------------------:|:--------------:|:------------------:|:-----------------------:|:-------------------:|:------------------:|:--------------:|:------------------:|
| No log | 0 | 0 | 1.4656 | 0.0059 | 5233.1723 | 3627.3586 | 1196.0 | 2475.0 | 0.4832 | 1204.0 | 0.4865 | 1196.0 | 1196.0 | 1196.0 | 1.0 | 1.0 | 0.0 | 0.0 | 1267.0 | 0.0 | 0.0 |
| 0.7404 | 1.0 | 5 | 0.7519 | 0.0059 | 2684.8085 | 1860.9674 | 1301.0 | 2475.0 | 0.5257 | 9.0 | 0.0036 | 0.0 | 97.0 | 1196.0 | 0.0811 | 0.0 | 1.0 | 1204.0 | 1267.0 | 0.9503 | 0.0008 |
| 1.4345 | 2.0 | 10 | 0.6475 | 0.0059 | 2312.1054 | 1602.6293 | 1678.0 | 2475.0 | 0.6780 | 8.0 | 0.0032 | 0.0 | 926.0 | 1196.0 | 0.7742 | 0.0 | 0.0 | 752.0 | 1267.0 | 0.5935 | 0.0 |
| 0.3056 | 3.0 | 15 | 0.6249 | 0.0059 | 2231.2720 | 1546.5999 | 1767.0 | 2475.0 | 0.7139 | 9.0 | 0.0036 | 1.0 | 906.0 | 1196.0 | 0.7575 | 0.0008 | 0.0 | 861.0 | 1267.0 | 0.6796 | 0.0 |
| 0.3324 | 4.0 | 20 | 0.6716 | 0.0059 | 2398.2346 | 1662.3295 | 1794.0 | 2475.0 | 0.7248 | 125.0 | 0.0505 | 9.0 | 831.0 | 1196.0 | 0.6948 | 0.0075 | 108.0 | 963.0 | 1267.0 | 0.7601 | 0.0852 |
| 0.7534 | 5.0 | 25 | 1.2676 | 0.0059 | 4526.0621 | 3137.2272 | 1499.0 | 2475.0 | 0.6057 | 932.0 | 0.3766 | 82.0 | 267.0 | 1196.0 | 0.2232 | 0.0686 | 842.0 | 1232.0 | 1267.0 | 0.9724 | 0.6646 |
| 0.2081 | 6.0 | 30 | 1.5980 | 0.0059 | 5705.7968 | 3954.9570 | 1505.0 | 2475.0 | 0.6081 | 703.0 | 0.2840 | 618.0 | 1175.0 | 1196.0 | 0.9824 | 0.5167 | 77.0 | 330.0 | 1267.0 | 0.2605 | 0.0608 |
| 0.082 | 7.0 | 35 | 1.1486 | 0.0059 | 4101.3733 | 2842.8553 | 1612.0 | 2475.0 | 0.6513 | 992.0 | 0.4008 | 120.0 | 449.0 | 1196.0 | 0.3754 | 0.1003 | 863.0 | 1163.0 | 1267.0 | 0.9179 | 0.6811 |
| 0.6616 | 8.0 | 40 | 1.2311 | 0.0059 | 4395.8751 | 3046.9884 | 1779.0 | 2475.0 | 0.7188 | 1492.0 | 0.6028 | 826.0 | 1015.0 | 1196.0 | 0.8487 | 0.6906 | 657.0 | 764.0 | 1267.0 | 0.6030 | 0.5185 |
| 0.0017 | 9.0 | 45 | 1.6432 | 0.0059 | 5867.3174 | 4066.9145 | 1756.0 | 2475.0 | 0.7095 | 1610.0 | 0.6505 | 923.0 | 1023.0 | 1196.0 | 0.8554 | 0.7717 | 678.0 | 733.0 | 1267.0 | 0.5785 | 0.5351 |
| 0.0001 | 10.0 | 50 | 2.1381 | 0.0059 | 7634.3190 | 5291.7067 | 1718.0 | 2475.0 | 0.6941 | 1546.0 | 0.6246 | 983.0 | 1082.0 | 1196.0 | 0.9047 | 0.8219 | 554.0 | 636.0 | 1267.0 | 0.5020 | 0.4373 |
| 0.0001 | 11.0 | 55 | 1.4472 | 0.0059 | 5167.3448 | 3581.7305 | 1813.0 | 2475.0 | 0.7325 | 1610.0 | 0.6505 | 792.0 | 898.0 | 1196.0 | 0.7508 | 0.6622 | 809.0 | 915.0 | 1267.0 | 0.7222 | 0.6385 |
| 0.0 | 12.0 | 60 | 1.4471 | 0.0059 | 5167.1333 | 3581.5839 | 1815.0 | 2475.0 | 0.7333 | 1670.0 | 0.6747 | 770.0 | 835.0 | 1196.0 | 0.6982 | 0.6438 | 891.0 | 980.0 | 1267.0 | 0.7735 | 0.7032 |
| 0.0 | 13.0 | 65 | 1.4645 | 0.0059 | 5229.3257 | 3624.6924 | 1820.0 | 2475.0 | 0.7354 | 1726.0 | 0.6974 | 812.0 | 852.0 | 1196.0 | 0.7124 | 0.6789 | 905.0 | 968.0 | 1267.0 | 0.7640 | 0.7143 |
| 1.3069 | 14.0 | 70 | 1.4787 | 0.0059 | 5279.8389 | 3659.7055 | 1822.0 | 2475.0 | 0.7362 | 1743.0 | 0.7042 | 834.0 | 870.0 | 1196.0 | 0.7274 | 0.6973 | 900.0 | 952.0 | 1267.0 | 0.7514 | 0.7103 |
| 0.6534 | 15.0 | 75 | 1.4931 | 0.0059 | 5331.4229 | 3695.4608 | 1820.0 | 2475.0 | 0.7354 | 1757.0 | 0.7099 | 859.0 | 888.0 | 1196.0 | 0.7425 | 0.7182 | 889.0 | 932.0 | 1267.0 | 0.7356 | 0.7017 |
| 0.6535 | 16.0 | 80 | 1.5030 | 0.0059 | 5366.7260 | 3719.9310 | 1818.0 | 2475.0 | 0.7345 | 1766.0 | 0.7135 | 869.0 | 893.0 | 1196.0 | 0.7467 | 0.7266 | 888.0 | 925.0 | 1267.0 | 0.7301 | 0.7009 |
| 0.0 | 17.0 | 85 | 1.5122 | 0.0059 | 5399.3942 | 3742.5749 | 1820.0 | 2475.0 | 0.7354 | 1767.0 | 0.7139 | 874.0 | 898.0 | 1196.0 | 0.7508 | 0.7308 | 884.0 | 922.0 | 1267.0 | 0.7277 | 0.6977 |
| 0.0 | 18.0 | 90 | 1.5168 | 0.0059 | 5415.9772 | 3754.0693 | 1822.0 | 2475.0 | 0.7362 | 1772.0 | 0.7160 | 879.0 | 902.0 | 1196.0 | 0.7542 | 0.7349 | 884.0 | 920.0 | 1267.0 | 0.7261 | 0.6977 |
| 0.0 | 19.0 | 95 | 1.5232 | 0.0059 | 5438.9175 | 3769.9703 | 1822.0 | 2475.0 | 0.7362 | 1774.0 | 0.7168 | 881.0 | 903.0 | 1196.0 | 0.7550 | 0.7366 | 884.0 | 919.0 | 1267.0 | 0.7253 | 0.6977 |
| 0.0 | 20.0 | 100 | 1.5241 | 0.0059 | 5442.2286 | 3772.2654 | 1819.0 | 2475.0 | 0.7349 | 1771.0 | 0.7156 | 884.0 | 905.0 | 1196.0 | 0.7567 | 0.7391 | 878.0 | 914.0 | 1267.0 | 0.7214 | 0.6930 |
| 0.0 | 21.0 | 105 | 1.5278 | 0.0059 | 5455.2160 | 3781.2676 | 1821.0 | 2475.0 | 0.7358 | 1778.0 | 0.7184 | 884.0 | 905.0 | 1196.0 | 0.7567 | 0.7391 | 885.0 | 916.0 | 1267.0 | 0.7230 | 0.6985 |
| 0.6535 | 22.0 | 110 | 1.5296 | 0.0059 | 5461.6471 | 3785.7253 | 1819.0 | 2475.0 | 0.7349 | 1776.0 | 0.7176 | 887.0 | 907.0 | 1196.0 | 0.7584 | 0.7416 | 880.0 | 912.0 | 1267.0 | 0.7198 | 0.6946 |
| 0.0 | 23.0 | 115 | 1.5328 | 0.0059 | 5473.0012 | 3793.5954 | 1821.0 | 2475.0 | 0.7358 | 1782.0 | 0.72 | 888.0 | 907.0 | 1196.0 | 0.7584 | 0.7425 | 885.0 | 914.0 | 1267.0 | 0.7214 | 0.6985 |
| 0.0 | 24.0 | 120 | 1.5339 | 0.0059 | 5477.0890 | 3796.4288 | 1821.0 | 2475.0 | 0.7358 | 1778.0 | 0.7184 | 889.0 | 910.0 | 1196.0 | 0.7609 | 0.7433 | 880.0 | 911.0 | 1267.0 | 0.7190 | 0.6946 |
| 1.3069 | 25.0 | 125 | 1.5357 | 0.0059 | 5483.3601 | 3800.7756 | 1818.0 | 2475.0 | 0.7345 | 1777.0 | 0.7180 | 886.0 | 907.0 | 1196.0 | 0.7584 | 0.7408 | 882.0 | 911.0 | 1267.0 | 0.7190 | 0.6961 |
| 0.0 | 26.0 | 130 | 1.5390 | 0.0059 | 5495.1006 | 3808.9135 | 1820.0 | 2475.0 | 0.7354 | 1779.0 | 0.7188 | 888.0 | 909.0 | 1196.0 | 0.7600 | 0.7425 | 882.0 | 911.0 | 1267.0 | 0.7190 | 0.6961 |
| 0.6534 | 27.0 | 135 | 1.5373 | 0.0059 | 5489.3342 | 3804.9165 | 1820.0 | 2475.0 | 0.7354 | 1782.0 | 0.72 | 889.0 | 908.0 | 1196.0 | 0.7592 | 0.7433 | 884.0 | 912.0 | 1267.0 | 0.7198 | 0.6977 |
| 0.0 | 28.0 | 140 | 1.5419 | 0.0059 | 5505.6494 | 3816.2253 | 1822.0 | 2475.0 | 0.7362 | 1780.0 | 0.7192 | 890.0 | 911.0 | 1196.0 | 0.7617 | 0.7441 | 881.0 | 911.0 | 1267.0 | 0.7190 | 0.6953 |
| 0.0 | 29.0 | 145 | 1.5433 | 0.0059 | 5510.5924 | 3819.6516 | 1821.0 | 2475.0 | 0.7358 | 1779.0 | 0.7188 | 889.0 | 910.0 | 1196.0 | 0.7609 | 0.7433 | 881.0 | 911.0 | 1267.0 | 0.7190 | 0.6953 |
| 0.0 | 30.0 | 150 | 1.5439 | 0.0059 | 5512.6644 | 3821.0878 | 1819.0 | 2475.0 | 0.7349 | 1777.0 | 0.7180 | 889.0 | 909.0 | 1196.0 | 0.7600 | 0.7433 | 879.0 | 910.0 | 1267.0 | 0.7182 | 0.6938 |
| 0.0 | 31.0 | 155 | 1.5443 | 0.0059 | 5514.1591 | 3822.1238 | 1820.0 | 2475.0 | 0.7354 | 1781.0 | 0.7196 | 890.0 | 911.0 | 1196.0 | 0.7617 | 0.7441 | 882.0 | 909.0 | 1267.0 | 0.7174 | 0.6961 |
| 0.6534 | 32.0 | 160 | 1.5471 | 0.0059 | 5524.2001 | 3829.0837 | 1820.0 | 2475.0 | 0.7354 | 1776.0 | 0.7176 | 891.0 | 912.0 | 1196.0 | 0.7625 | 0.7450 | 876.0 | 908.0 | 1267.0 | 0.7167 | 0.6914 |
| 0.0 | 33.0 | 165 | 1.5472 | 0.0059 | 5524.7178 | 3829.4426 | 1821.0 | 2475.0 | 0.7358 | 1778.0 | 0.7184 | 891.0 | 912.0 | 1196.0 | 0.7625 | 0.7450 | 878.0 | 909.0 | 1267.0 | 0.7174 | 0.6930 |
| 0.0 | 34.0 | 170 | 1.5496 | 0.0059 | 5533.2649 | 3835.3670 | 1817.0 | 2475.0 | 0.7341 | 1777.0 | 0.7180 | 890.0 | 911.0 | 1196.0 | 0.7617 | 0.7441 | 878.0 | 906.0 | 1267.0 | 0.7151 | 0.6930 |
| 0.0 | 35.0 | 175 | 1.5519 | 0.0059 | 5541.3527 | 3840.9730 | 1820.0 | 2475.0 | 0.7354 | 1780.0 | 0.7192 | 890.0 | 910.0 | 1196.0 | 0.7609 | 0.7441 | 881.0 | 910.0 | 1267.0 | 0.7182 | 0.6953 |
| 0.0 | 36.0 | 180 | 1.5514 | 0.0059 | 5539.5094 | 3839.6954 | 1820.0 | 2475.0 | 0.7354 | 1781.0 | 0.7196 | 891.0 | 912.0 | 1196.0 | 0.7625 | 0.7450 | 881.0 | 908.0 | 1267.0 | 0.7167 | 0.6953 |
| 0.0 | 37.0 | 185 | 1.5539 | 0.0059 | 5548.5974 | 3845.9946 | 1819.0 | 2475.0 | 0.7349 | 1780.0 | 0.7192 | 891.0 | 912.0 | 1196.0 | 0.7625 | 0.7450 | 880.0 | 907.0 | 1267.0 | 0.7159 | 0.6946 |
| 0.0 | 38.0 | 190 | 1.5534 | 0.0059 | 5546.8413 | 3844.7774 | 1819.0 | 2475.0 | 0.7349 | 1781.0 | 0.7196 | 892.0 | 912.0 | 1196.0 | 0.7625 | 0.7458 | 880.0 | 907.0 | 1267.0 | 0.7159 | 0.6946 |
| 0.0 | 39.0 | 195 | 1.5541 | 0.0059 | 5549.1300 | 3846.3638 | 1820.0 | 2475.0 | 0.7354 | 1781.0 | 0.7196 | 892.0 | 912.0 | 1196.0 | 0.7625 | 0.7458 | 880.0 | 908.0 | 1267.0 | 0.7167 | 0.6946 |
| 0.0 | 40.0 | 200 | 1.5561 | 0.0059 | 5556.3793 | 3851.3886 | 1821.0 | 2475.0 | 0.7358 | 1785.0 | 0.7212 | 894.0 | 914.0 | 1196.0 | 0.7642 | 0.7475 | 882.0 | 907.0 | 1267.0 | 0.7159 | 0.6961 |
| 0.6534 | 41.0 | 205 | 1.5581 | 0.0059 | 5563.5837 | 3856.3823 | 1815.0 | 2475.0 | 0.7333 | 1778.0 | 0.7184 | 891.0 | 910.0 | 1196.0 | 0.7609 | 0.7450 | 878.0 | 905.0 | 1267.0 | 0.7143 | 0.6930 |
| 0.6534 | 42.0 | 210 | 1.5582 | 0.0059 | 5563.8211 | 3856.5469 | 1819.0 | 2475.0 | 0.7349 | 1784.0 | 0.7208 | 893.0 | 913.0 | 1196.0 | 0.7634 | 0.7467 | 882.0 | 906.0 | 1267.0 | 0.7151 | 0.6961 |
| 0.6534 | 43.0 | 215 | 1.5591 | 0.0059 | 5566.9433 | 3858.7111 | 1819.0 | 2475.0 | 0.7349 | 1784.0 | 0.7208 | 895.0 | 915.0 | 1196.0 | 0.7651 | 0.7483 | 880.0 | 904.0 | 1267.0 | 0.7135 | 0.6946 |
| 0.0 | 44.0 | 220 | 1.5600 | 0.0059 | 5570.2078 | 3860.9738 | 1818.0 | 2475.0 | 0.7345 | 1779.0 | 0.7188 | 893.0 | 913.0 | 1196.0 | 0.7634 | 0.7467 | 877.0 | 905.0 | 1267.0 | 0.7143 | 0.6922 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755526580
|
sampingkaca72
| 2025-08-18T14:42:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:42:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aiface/phobert-large_nli
|
aiface
| 2025-08-18T14:31:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-large",
"base_model:finetune:vinai/phobert-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-18T11:25:21Z |
---
library_name: transformers
license: mit
base_model: vinai/phobert-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: phobert-large_nli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-large_nli
This model is a fine-tuned version of [vinai/phobert-large](https://huggingface.co/vinai/phobert-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3062
- Accuracy: 0.8102
- Precision Macro: 0.8106
- Recall Macro: 0.8103
- F1 Macro: 0.8103
- F1 Weighted: 0.8103
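No usage example is provided in this card; a minimal sketch with the `transformers` pipeline is shown below. The premise/hypothesis pair is a made-up placeholder, and note that PhoBERT generally expects word-segmented Vietnamese input.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="aiface/phobert-large_nli")

# Sentence-pair (premise, hypothesis) input for NLI-style classification
print(classifier({"text": "Tôi đang đọc sách .", "text_pair": "Tôi đang học ."}))
```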
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:-----------:|
| 1.0976 | 1.0 | 72 | 1.0257 | 0.5237 | 0.5529 | 0.5264 | 0.5082 | 0.5072 |
| 0.9271 | 2.0 | 144 | 0.6649 | 0.7592 | 0.7887 | 0.7579 | 0.7590 | 0.7590 |
| 0.4037 | 3.0 | 216 | 0.5864 | 0.7894 | 0.7930 | 0.7895 | 0.7895 | 0.7895 |
| 0.2866 | 4.0 | 288 | 0.6385 | 0.8120 | 0.8142 | 0.8125 | 0.8118 | 0.8118 |
| 0.1197 | 5.0 | 360 | 0.6949 | 0.8115 | 0.8117 | 0.8115 | 0.8115 | 0.8115 |
| 0.0939 | 6.0 | 432 | 0.7485 | 0.8058 | 0.8084 | 0.8060 | 0.8058 | 0.8059 |
| 0.0647 | 7.0 | 504 | 0.9244 | 0.7920 | 0.7977 | 0.7921 | 0.7919 | 0.7918 |
| 0.0457 | 8.0 | 576 | 0.8464 | 0.8106 | 0.8107 | 0.8107 | 0.8106 | 0.8106 |
| 0.046 | 9.0 | 648 | 0.9886 | 0.8062 | 0.8121 | 0.8066 | 0.8064 | 0.8063 |
| 0.026 | 10.0 | 720 | 0.9887 | 0.8120 | 0.8126 | 0.8121 | 0.8120 | 0.8121 |
| 0.0244 | 11.0 | 792 | 1.0642 | 0.8124 | 0.8130 | 0.8126 | 0.8125 | 0.8125 |
| 0.0211 | 12.0 | 864 | 1.0197 | 0.8075 | 0.8097 | 0.8078 | 0.8077 | 0.8077 |
| 0.0146 | 13.0 | 936 | 1.1487 | 0.8151 | 0.8171 | 0.8155 | 0.8151 | 0.8151 |
| 0.0085 | 14.0 | 1008 | 1.1846 | 0.8053 | 0.8056 | 0.8053 | 0.8053 | 0.8053 |
| 0.0051 | 15.0 | 1080 | 1.2905 | 0.8084 | 0.8095 | 0.8085 | 0.8084 | 0.8084 |
| 0.0036 | 16.0 | 1152 | 1.3259 | 0.8102 | 0.8121 | 0.8104 | 0.8104 | 0.8104 |
| 0.0027 | 17.0 | 1224 | 1.3187 | 0.8115 | 0.8121 | 0.8115 | 0.8116 | 0.8116 |
| 0.0023 | 18.0 | 1296 | 1.3024 | 0.8115 | 0.8120 | 0.8117 | 0.8116 | 0.8116 |
| 0.0025 | 19.0 | 1368 | 1.3049 | 0.8111 | 0.8115 | 0.8112 | 0.8111 | 0.8111 |
| 0.0037 | 20.0 | 1440 | 1.3062 | 0.8102 | 0.8106 | 0.8103 | 0.8103 | 0.8103 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755524967
|
helmutsukocok
| 2025-08-18T14:15:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:15:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755524309
|
indoempatnol
| 2025-08-18T14:08:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:08:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
abcorrea/p2-v5
|
abcorrea
| 2025-08-18T14:03:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:abcorrea/p2-v4",
"base_model:finetune:abcorrea/p2-v4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:31:38Z |
---
base_model: abcorrea/p2-v4
library_name: transformers
model_name: p2-v5
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for p2-v5
This model is a fine-tuned version of [abcorrea/p2-v4](https://huggingface.co/abcorrea/p2-v4).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="abcorrea/p2-v5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
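A minimal TRL SFT sketch is shown below; the dataset is a placeholder (the card does not state which data was used) and the config values are illustrative only.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder conversational dataset; substitute the actual training data
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="abcorrea/p2-v4",
    args=SFTConfig(output_dir="p2-v5"),
    train_dataset=dataset,
)
trainer.train()
```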
### Framework versions
- TRL: 0.19.1
- Transformers: 4.52.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
drush8/Qwen3-1.7B-INT4
|
drush8
| 2025-08-18T14:02:21Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"torchao",
"region:us"
] |
text-generation
| 2025-08-18T14:02:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zhai-lw/L3AC
|
zhai-lw
| 2025-08-18T13:46:08Z | 0 | 0 |
l3ac
|
[
"l3ac",
"audio-to-audio",
"arxiv:2504.04949",
"region:us"
] |
audio-to-audio
| 2025-08-15T11:27:35Z |
---
pipeline_tag: audio-to-audio
library_name: l3ac
---
# L3AC: Towards a Lightweight and Lossless Audio Codec
This repository contains the implementation of L3AC, a lightweight neural audio codec introduced in the paper titled "[L3AC: Towards a Lightweight and Lossless Audio Codec](https://huggingface.co/papers/2504.04949)".
Neural audio codecs have recently gained traction for their ability to compress high-fidelity audio and provide discrete tokens for generative modeling. However, leading approaches often rely on resource-intensive models and complex multi-quantizer architectures, limiting their practicality in real-world applications. In this work, we introduce L3AC, a lightweight neural audio codec that addresses these challenges by leveraging a single quantizer and a highly efficient architecture. To enhance reconstruction fidelity while minimizing model complexity, L3AC explores streamlined convolutional networks and local Transformer modules, alongside TConv--a novel structure designed to capture acoustic variations across multiple temporal scales. Despite its compact design, extensive experiments across diverse datasets demonstrate that L3AC matches or exceeds the reconstruction quality of leading codecs while reducing computational overhead by an order of magnitude. The single-quantizer design further enhances its adaptability for downstream tasks.
<figure class="image">
<img src="https://github.com/zhai-lw/L3AC/raw/main/bubble_chart.svg" alt="Comparison of various audio codec">
<figcaption>Comparison of various audio codec</figcaption>
</figure>
**Paper:** [L3AC: Towards a Lightweight and Lossless Audio Codec](https://huggingface.co/papers/2504.04949)
**Official GitHub Repository:** [https://github.com/zhai-lw/L3AC](https://github.com/zhai-lw/L3AC)
## Installation
You can install the `l3ac` library using pip:
```bash
pip install l3ac
```
### Demo
Firstly, make sure you have installed the `librosa` package to load the example audio file. You can install it using pip:
```bash
pip install librosa
```
Then, you can use the following code to load a sample audio file, encode it using the L3AC model, and decode it back to audio. The code also calculates the mean squared error (MSE) between the original and generated audio.
```python
import librosa
import torch
import l3ac
all_models = l3ac.list_models()
print(f"Available models: {all_models}")
MODEL_USED = '1kbps'
codec = l3ac.get_model(MODEL_USED)
print(f"loaded codec({MODEL_USED}) and codec sample rate: {codec.config.sample_rate}")
sample_audio, sample_rate = librosa.load(librosa.example("libri1"))
sample_audio = sample_audio[None, :]
print(f"loaded sample audio and audio sample_rate :{sample_rate}")
sample_audio = librosa.resample(sample_audio, orig_sr=sample_rate, target_sr=codec.config.sample_rate)
codec.network.cuda()
codec.network.eval()
with torch.inference_mode():
audio_in = torch.tensor(sample_audio, dtype=torch.float32, device='cuda')
_, audio_length = audio_in.shape
print(f"{audio_in.shape=}")
q_feature, indices = codec.encode_audio(audio_in)
audio_out = codec.decode_audio(q_feature) # or
# audio_out = codec.decode_audio(indices=indices['indices'])
generated_audio = audio_out[:, :audio_length].detach().cpu().numpy()
mse = ((sample_audio - generated_audio) ** 2).mean().item()
print(f"codec({MODEL_USED}) mse: {mse}")
```
### Available Models
| config_name | Sample rate (Hz) | tokens/s | Codebook size | Bitrate (bps) |
|-------------|-----------------|----------|---------------|--------------|
| 0k75bps | 16,000 | 44.44 | 117,649 | 748.6 |
| 1kbps | 16,000 | 59.26 | 117,649 | 998.2 |
| 1k5bps | 16,000 | 88.89 | 117,649 | 1497.3 |
| 3kbps | 16,000 | 166.67 | 250,047 | 2988.6 |
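The bitrates above appear to follow directly from tokens/s × log2(codebook size). A small sanity-check sketch (my own derivation, not part of the original repository):

```python
import math

# bitrate (bps) ≈ tokens per second × bits per token, where bits per token = log2(codebook size)
configs = {
    "0k75bps": (44.44, 117_649),
    "1kbps": (59.26, 117_649),
    "1k5bps": (88.89, 117_649),
    "3kbps": (166.67, 250_047),
}
for name, (tokens_per_s, codebook_size) in configs.items():
    # Matches the table values to within rounding
    print(name, round(tokens_per_s * math.log2(codebook_size), 1))
```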
|
skylord/gemma_270mn_lora_model
|
skylord
| 2025-08-18T13:42:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T13:41:57Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** skylord
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755522616
|
katanyasekolah
| 2025-08-18T13:39:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:39:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/InnoSpark-R-72B-0701-GGUF
|
mradermacher
| 2025-08-18T13:28:26Z | 129 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:sii-research/InnoSpark-R-72B-0701",
"base_model:quantized:sii-research/InnoSpark-R-72B-0701",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-22T09:53:04Z |
---
base_model: sii-research/InnoSpark-R-72B-0701
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sii-research/InnoSpark-R-72B-0701
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InnoSpark-R-72B-0701-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
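For the split quants below, the parts are plain byte-splits, so joining them back into a single `.gguf` is simply concatenation in order. For example (a sketch, using the file names from the Q6_K row):

```python
import shutil

parts = [
    "InnoSpark-R-72B-0701.Q6_K.gguf.part1of2",
    "InnoSpark-R-72B-0701.Q6_K.gguf.part2of2",
]

# Concatenate the raw byte-split parts, in order, into one GGUF file
with open("InnoSpark-R-72B-0701.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```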
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755523456
|
Vasya777
| 2025-08-18T13:25:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:24:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MorcuendeA/mercadona_sentiment_product_detection
|
MorcuendeA
| 2025-08-18T11:37:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-18T11:36:14Z |
---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mercadona_sentiment_product_detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mercadona_sentiment_product_detection
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5101
- Accuracy: 0.8469
- F1 Score: 0.8527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 69
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|
| 0.6048 | 1.4444 | 20 | 0.4691 | 0.8329 | 0.8302 |
| 0.407 | 2.8889 | 40 | 0.3629 | 0.8515 | 0.8483 |
| 0.3003 | 4.2963 | 60 | 0.3482 | 0.8654 | 0.8664 |
| 0.2108 | 5.7407 | 80 | 0.5634 | 0.8167 | 0.8358 |
| 0.1735 | 7.1481 | 100 | 0.5181 | 0.8445 | 0.8534 |
| 0.1195 | 8.5926 | 120 | 0.4866 | 0.8538 | 0.8584 |
### Framework versions
- Transformers 4.55.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_0_prover1_
|
neural-interactive-proofs
| 2025-08-18T10:36:31Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T10:35:35Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_0_prover1_
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_0_prover1_
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_0_prover1_", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-18_11-10-00_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_0_prover1)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
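A minimal DPO training sketch with TRL is shown below; the preference dataset is a placeholder (the prover-debate preference data used for this run is not referenced here) and the hyperparameters are illustrative only.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Qwen/Qwen2.5-32B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference dataset with "prompt"/"chosen"/"rejected" columns
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="qwen2_5-32b-instruct-dpo", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```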
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 3.0.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Muapi/schoolgirl
|
Muapi
| 2025-08-18T10:33:07Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T10:32:49Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Schoolgirl

**Base model**: Flux.1 D
**Trained words**: Sch00lg1rl, american, african, caribbean, indian, australian, chinese, japanese, korean, polynesian, european, latin american, arabic, hispanic, canadian, mexican, cuban, cheerleder
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:940001@1892401", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755511314
|
quantumxnode
| 2025-08-18T10:26:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T10:26:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BizarreCake/qwen_2.5_7b_bird_rmrf_better_4
|
BizarreCake
| 2025-08-18T09:58:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T09:22:59Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BizarreCake
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
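The card does not include a usage snippet; a minimal sketch is given below, assuming the repo contains a full merged checkpoint (rather than LoRA adapters only) and follows the standard Qwen2.5 chat template.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "BizarreCake/qwen_2.5_7b_bird_rmrf_better_4"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Say hello in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```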
|
esohecik963/blockassist-bc-long_beaked_ibis_1755509818
|
esohecik963
| 2025-08-18T09:37:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"long beaked ibis",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T09:37:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- long beaked ibis
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/dollskill-collection-flux
|
Muapi
| 2025-08-18T09:25:47Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T09:25:36Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Dollskill Collection [Flux]

**Base model**: Flux.1 D
**Trained words**: Dollskill, "type": "white t-shirt, oversized", "color": "white", "fit": "sculpted and partially shredded, with wire or plastic structures poking through", "details": "red lettering 'Make Boys Cry', but each letter stretched, slashed, or melted for a distorted look" }, "bottom": { "type": "denim hotpants", "color": "blue with neon paint splatters", "fit": "extremely short, high-cut, uneven leg openings, frayed hem with metallic eyelets and studs"
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:659422@2019754", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
kingabzpro/wav2vec2-large-xls-r-300m-Urdu
|
kingabzpro
| 2025-08-18T09:24:59Z | 52,891 | 14 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"ur",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ur
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
base_model: facebook/wav2vec2-xls-r-300m
model-index:
- name: wav2vec2-large-xls-r-300m-Urdu
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ur
metrics:
- type: wer
value: 39.89
name: Test WER
- type: cer
value: 16.7
name: Test CER
new_version: kingabzpro/whisper-large-v3-turbo-urdu
pipeline_tag: automatic-speech-recognition
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Urdu
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9889
- Wer: 0.5607
- Cer: 0.2370
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xls-r-300m-Urdu --dataset mozilla-foundation/common_voice_8_0 --config ur --split test
```
### Inference With LM
```python
# pip install pyctcdecode kenlm  (required for language-model-boosted decoding)
from datasets import load_dataset, Audio
from transformers import pipeline
model = "kingabzpro/wav2vec2-large-xls-r-300m-Urdu"
data = load_dataset("mozilla-foundation/common_voice_8_0",
"ur",
split="test",
streaming=True,
trust_remote_code=True)
sample_iter = iter(data.cast_column("audio",
Audio(sampling_rate=16_000)))
sample = next(sample_iter)
asr = pipeline("automatic-speech-recognition", model=model)
prediction = asr(sample["audio"]["array"],
chunk_length_s=5,
stride_length_s=1)
prediction
# => {'text': 'مزدور تے کہ علاوہ سرکاری اور کاروباری لوگ ن ڈرپجے کام شروع کرتے'}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 3.6398 | 30.77 | 400 | 3.3517 | 1.0 | 1.0 |
| 2.9225 | 61.54 | 800 | 2.5123 | 1.0 | 0.8310 |
| 1.2568 | 92.31 | 1200 | 0.9699 | 0.6273 | 0.2575 |
| 0.8974 | 123.08 | 1600 | 0.9715 | 0.5888 | 0.2457 |
| 0.7151 | 153.85 | 2000 | 0.9984 | 0.5588 | 0.2353 |
| 0.6416 | 184.62 | 2400 | 0.9889 | 0.5607 | 0.2370 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 52.03 | 39.89 |
|
Muapi/vintage-photo-flux
|
Muapi
| 2025-08-18T09:21:39Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T09:21:27Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Vintage Photo Flux

**Base model**: Flux.1 D
**Trained words**: 35mm B&W vintage street photo of
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:677699@758596", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Vortexjr/max-adpapter
|
Vortexjr
| 2025-08-18T08:44:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T08:43:18Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
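In the absence of author-provided code, a generic sketch is shown below; it assumes the repo holds full Qwen3-style causal-LM weights. If it only contains adapter weights (as the name may suggest), it would instead need to be attached to its base model with PEFT.
```python
from transformers import pipeline

# Assumption: full model weights are stored in this repo.
chat = pipeline("text-generation", model="Vortexjr/max-adpapter", device_map="auto")
print(chat([{"role": "user", "content": "Hello!"}], max_new_tokens=64)[0]["generated_text"])
```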
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aihi5/my-awesome-model
|
aihi5
| 2025-08-18T06:42:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-18T06:42:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
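In the absence of author details, a minimal embedding-extraction sketch is shown below, assuming a standard BERT encoder (consistent with the `bert` / `feature-extraction` tags).
```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "aihi5/my-awesome-model"
tokenizer = AutoTokenizer.from_pretrained(repo)
encoder = AutoModel.from_pretrained(repo)

batch = tokenizer(["an example sentence"], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state   # (batch, seq_len, hidden_dim)
embeddings = hidden.mean(dim=1)                   # simple mean pooling
```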
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
donoway/GSM8K-Binary_Llama-3.2-1B-trn9haqb
|
donoway
| 2025-08-18T05:22:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T05:01:41Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: GSM8K-Binary_Llama-3.2-1B-trn9haqb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GSM8K-Binary_Llama-3.2-1B-trn9haqb
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3861
- Model Preparation Time: 0.0059
- Mdl: 4949.1510
- Accumulated Loss: 3430.4901
- Correct Preds: 1952.0
- Total Preds: 2475.0
- Accuracy: 0.7887
- Correct Gen Preds: 1959.0
- Gen Accuracy: 0.7915
- Correct Gen Preds 34192: 979.0
- Correct Preds 34192: 980.0
- Total Labels 34192: 1196.0
- Accuracy 34192: 0.8194
- Gen Accuracy 34192: 0.8186
- Correct Gen Preds 41568: 971.0
- Correct Preds 41568: 972.0
- Total Labels 41568: 1267.0
- Accuracy 41568: 0.7672
- Gen Accuracy 41568: 0.7664
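No usage snippet is included; the sketch below shows one way to query the model, assuming it loads as a standard Llama-3.2-1B causal LM and answers with a single token (an inference from the per-token-id metrics above, not something documented here). The prompt format is a placeholder.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "donoway/GSM8K-Binary_Llama-3.2-1B-trn9haqb"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

prompt = "Question: ... Proposed answer: ... Is the answer correct?"  # placeholder format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]
print(tokenizer.decode([int(next_token_logits.argmax())]))  # most likely next token
```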
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 34192 | Correct Preds 34192 | Total Labels 34192 | Accuracy 34192 | Gen Accuracy 34192 | Correct Gen Preds 41568 | Correct Preds 41568 | Total Labels 41568 | Accuracy 41568 | Gen Accuracy 41568 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:-----------------------:|:-------------------:|:------------------:|:--------------:|:------------------:|:-----------------------:|:-------------------:|:------------------:|:--------------:|:------------------:|
| No log | 0 | 0 | 1.4656 | 0.0059 | 5233.1723 | 3627.3586 | 1196.0 | 2475.0 | 0.4832 | 1204.0 | 0.4865 | 1196.0 | 1196.0 | 1196.0 | 1.0 | 1.0 | 0.0 | 0.0 | 1267.0 | 0.0 | 0.0 |
| 0.5562 | 1.0 | 33 | 0.5791 | 0.0059 | 2067.7129 | 1433.2294 | 1800.0 | 2475.0 | 0.7273 | 136.0 | 0.0549 | 0.0 | 1001.0 | 1196.0 | 0.8370 | 0.0 | 129.0 | 799.0 | 1267.0 | 0.6306 | 0.1018 |
| 0.2241 | 2.0 | 66 | 0.6917 | 0.0059 | 2469.9161 | 1712.0154 | 1750.0 | 2475.0 | 0.7071 | 25.0 | 0.0101 | 2.0 | 1150.0 | 1196.0 | 0.9615 | 0.0017 | 16.0 | 600.0 | 1267.0 | 0.4736 | 0.0126 |
| 0.2445 | 3.0 | 99 | 0.5807 | 0.0059 | 2073.3593 | 1437.1432 | 1901.0 | 2475.0 | 0.7681 | 672.0 | 0.2715 | 56.0 | 817.0 | 1196.0 | 0.6831 | 0.0468 | 608.0 | 1084.0 | 1267.0 | 0.8556 | 0.4799 |
| 0.3792 | 4.0 | 132 | 1.0107 | 0.0059 | 3608.9732 | 2501.5496 | 1808.0 | 2475.0 | 0.7305 | 752.0 | 0.3038 | 361.0 | 1124.0 | 1196.0 | 0.9398 | 0.3018 | 383.0 | 684.0 | 1267.0 | 0.5399 | 0.3023 |
| 0.5311 | 5.0 | 165 | 1.0453 | 0.0059 | 3732.4069 | 2587.1073 | 1949.0 | 2475.0 | 0.7875 | 1925.0 | 0.7778 | 965.0 | 987.0 | 1196.0 | 0.8253 | 0.8069 | 951.0 | 962.0 | 1267.0 | 0.7593 | 0.7506 |
| 0.0003 | 6.0 | 198 | 1.1808 | 0.0059 | 4216.3306 | 2922.5376 | 1929.0 | 2475.0 | 0.7794 | 1889.0 | 0.7632 | 997.0 | 1034.0 | 1196.0 | 0.8645 | 0.8336 | 885.0 | 895.0 | 1267.0 | 0.7064 | 0.6985 |
| 0.4714 | 7.0 | 231 | 1.7950 | 0.0059 | 6409.2129 | 4442.5278 | 1910.0 | 2475.0 | 0.7717 | 1900.0 | 0.7677 | 1077.0 | 1092.0 | 1196.0 | 0.9130 | 0.9005 | 815.0 | 818.0 | 1267.0 | 0.6456 | 0.6433 |
| 0.0002 | 8.0 | 264 | 1.3861 | 0.0059 | 4949.1510 | 3430.4901 | 1952.0 | 2475.0 | 0.7887 | 1959.0 | 0.7915 | 979.0 | 980.0 | 1196.0 | 0.8194 | 0.8186 | 971.0 | 972.0 | 1267.0 | 0.7672 | 0.7664 |
| 0.0001 | 9.0 | 297 | 1.8078 | 0.0059 | 6455.1510 | 4474.3697 | 1889.0 | 2475.0 | 0.7632 | 1895.0 | 0.7657 | 1088.0 | 1089.0 | 1196.0 | 0.9105 | 0.9097 | 799.0 | 800.0 | 1267.0 | 0.6314 | 0.6306 |
| 0.0 | 10.0 | 330 | 1.6442 | 0.0059 | 5870.8161 | 4069.3396 | 1937.0 | 2475.0 | 0.7826 | 1944.0 | 0.7855 | 1059.0 | 1059.0 | 1196.0 | 0.8855 | 0.8855 | 877.0 | 878.0 | 1267.0 | 0.6930 | 0.6922 |
| 0.0 | 11.0 | 363 | 1.6431 | 0.0059 | 5866.8306 | 4066.5771 | 1938.0 | 2475.0 | 0.7830 | 1946.0 | 0.7863 | 1058.0 | 1058.0 | 1196.0 | 0.8846 | 0.8846 | 880.0 | 880.0 | 1267.0 | 0.6946 | 0.6946 |
| 0.0 | 12.0 | 396 | 1.6410 | 0.0059 | 5859.5168 | 4061.5076 | 1934.0 | 2475.0 | 0.7814 | 1941.0 | 0.7842 | 1055.0 | 1055.0 | 1196.0 | 0.8821 | 0.8821 | 878.0 | 879.0 | 1267.0 | 0.6938 | 0.6930 |
| 0.0 | 13.0 | 429 | 1.6420 | 0.0059 | 5863.0062 | 4063.9262 | 1935.0 | 2475.0 | 0.7818 | 1943.0 | 0.7851 | 1056.0 | 1056.0 | 1196.0 | 0.8829 | 0.8829 | 879.0 | 879.0 | 1267.0 | 0.6938 | 0.6938 |
| 0.0 | 14.0 | 462 | 1.6393 | 0.0059 | 5853.5075 | 4057.3422 | 1936.0 | 2475.0 | 0.7822 | 1944.0 | 0.7855 | 1055.0 | 1055.0 | 1196.0 | 0.8821 | 0.8821 | 881.0 | 881.0 | 1267.0 | 0.6953 | 0.6953 |
| 0.4705 | 15.0 | 495 | 1.6394 | 0.0059 | 5853.9322 | 4057.6366 | 1935.0 | 2475.0 | 0.7818 | 1943.0 | 0.7851 | 1054.0 | 1054.0 | 1196.0 | 0.8813 | 0.8813 | 881.0 | 881.0 | 1267.0 | 0.6953 | 0.6953 |
| 0.0 | 16.0 | 528 | 1.6388 | 0.0059 | 5851.4802 | 4055.9370 | 1936.0 | 2475.0 | 0.7822 | 1944.0 | 0.7855 | 1055.0 | 1055.0 | 1196.0 | 0.8821 | 0.8821 | 881.0 | 881.0 | 1267.0 | 0.6953 | 0.6953 |
| 0.0 | 17.0 | 561 | 1.6396 | 0.0059 | 5854.5643 | 4058.0747 | 1937.0 | 2475.0 | 0.7826 | 1945.0 | 0.7859 | 1054.0 | 1054.0 | 1196.0 | 0.8813 | 0.8813 | 883.0 | 883.0 | 1267.0 | 0.6969 | 0.6969 |
| 0.4705 | 18.0 | 594 | 1.6388 | 0.0059 | 5851.7692 | 4056.1373 | 1937.0 | 2475.0 | 0.7826 | 1945.0 | 0.7859 | 1053.0 | 1053.0 | 1196.0 | 0.8804 | 0.8804 | 884.0 | 884.0 | 1267.0 | 0.6977 | 0.6977 |
| 0.0 | 19.0 | 627 | 1.6396 | 0.0059 | 5854.6347 | 4058.1235 | 1935.0 | 2475.0 | 0.7818 | 1943.0 | 0.7851 | 1052.0 | 1052.0 | 1196.0 | 0.8796 | 0.8796 | 883.0 | 883.0 | 1267.0 | 0.6969 | 0.6969 |
| 0.0 | 20.0 | 660 | 1.6372 | 0.0059 | 5845.9689 | 4052.1169 | 1936.0 | 2475.0 | 0.7822 | 1944.0 | 0.7855 | 1052.0 | 1052.0 | 1196.0 | 0.8796 | 0.8796 | 884.0 | 884.0 | 1267.0 | 0.6977 | 0.6977 |
| 0.0 | 21.0 | 693 | 1.6389 | 0.0059 | 5852.0283 | 4056.3169 | 1935.0 | 2475.0 | 0.7818 | 1943.0 | 0.7851 | 1052.0 | 1052.0 | 1196.0 | 0.8796 | 0.8796 | 883.0 | 883.0 | 1267.0 | 0.6969 | 0.6969 |
| 0.0 | 22.0 | 726 | 1.6393 | 0.0059 | 5853.4144 | 4057.2777 | 1936.0 | 2475.0 | 0.7822 | 1944.0 | 0.7855 | 1053.0 | 1053.0 | 1196.0 | 0.8804 | 0.8804 | 883.0 | 883.0 | 1267.0 | 0.6969 | 0.6969 |
| 0.0 | 23.0 | 759 | 1.6391 | 0.0059 | 5852.5099 | 4056.6507 | 1934.0 | 2475.0 | 0.7814 | 1942.0 | 0.7846 | 1051.0 | 1051.0 | 1196.0 | 0.8788 | 0.8788 | 883.0 | 883.0 | 1267.0 | 0.6969 | 0.6969 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Malikeh1375/Qwen2.5-1.5B-Advanced-Mathematics-And-Modeling-Distilled-8Clusters-25K
|
Malikeh1375
| 2025-08-18T05:15:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T05:14:56Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: '8'
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for 8
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Malikeh1375/Qwen2.5-1.5B-Advanced-Mathematics-And-Modeling-Distilled-8Clusters-25K", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/raffel-reports/SLMensembles/runs/s3z1gxvg)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3+computecanada
- Pytorch: 2.6.0+computecanada
- Datasets: 3.6.0+computecanada
- Tokenizers: 0.21.1+computecanada
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_0_iter_6_provers_
|
neural-interactive-proofs
| 2025-08-18T05:06:26Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T05:05:34Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_0_iter_6_provers_
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_0_iter_6_provers_
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_0_iter_6_provers_", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-18_04-58-37_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_0_iter_6_provers_group)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 3.0.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
koloni/blockassist-bc-deadly_graceful_stingray_1755487440
|
koloni
| 2025-08-18T03:49:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T03:49:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ksngi56/blockassist-bc-large_beaked_ram_1755487083
|
ksngi56
| 2025-08-18T03:19:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"large beaked ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T03:19:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- large beaked ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755482139
|
thanobidex
| 2025-08-18T02:20:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T02:20:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755480540
|
koloni
| 2025-08-18T01:54:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T01:54:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
runchat/lora-bdf1d55d-b0e7-4e3a-961d-cc3b4bdda758-3h81go
|
runchat
| 2025-08-18T00:38:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"lora",
"text-to-image",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-08-18T00:38:40Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- lora
- diffusers
- text-to-image
widget:
- text: 'a photo of sks style'
output:
url: "placeholder.jpg"
---
# SDXL LoRA: sks
This is a LoRA (Low-Rank Adaptation) model for Stable Diffusion XL fine-tuned on images with the trigger word `sks`.
## Files
- `pytorch_lora_weights.safetensors`: Diffusers format (use with diffusers library)
- `pytorch_lora_weights_webui.safetensors`: Kohya format (use with AUTOMATIC1111, ComfyUI, etc.)
## Usage
### Diffusers Library
```python
from diffusers import StableDiffusionXLPipeline
import torch
# Load base model
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.float16
)
# Load LoRA weights (diffusers format)
pipe.load_lora_weights("runchat/lora-bdf1d55d-b0e7-4e3a-961d-cc3b4bdda758-3h81go", weight_name="pytorch_lora_weights.safetensors")
pipe = pipe.to("cuda")
# Generate image
prompt = "a photo of sks style"
image = pipe(prompt, num_inference_steps=25, guidance_scale=7.5).images[0]
image.save("output.png")
```
### WebUI (AUTOMATIC1111, ComfyUI, etc.)
Download the `pytorch_lora_weights_webui.safetensors` file and place it in your WebUI's LoRA directory.
Use the trigger word `sks` in your prompts.
## Training Details
- Base model: stabilityai/stable-diffusion-xl-base-1.0
- Training steps: 1000
- Learning rate: 0.0001
- Batch size: 1
- LoRA rank: 16
- Trigger word: `sks`
|
Mostefa-Terbeche/diabetic-retinopathy-aptos-efficientnet
|
Mostefa-Terbeche
| 2025-08-18T00:07:51Z | 0 | 0 | null |
[
"diabetic-retinopathy",
"medical-imaging",
"pytorch",
"computer-vision",
"retinal-imaging",
"dataset:aptos",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-08-17T11:45:41Z |
---
license: apache-2.0
tags:
- diabetic-retinopathy
- medical-imaging
- pytorch
- computer-vision
- retinal-imaging
datasets:
- aptos
metrics:
- accuracy
- quadratic-kappa
- auc
model-index:
- name: aptos_efficientnet
results:
- task:
type: image-classification
name: Diabetic Retinopathy Classification
dataset:
type: aptos
name: APTOS
metrics:
- type: accuracy
value: 0.7704918032786885
- type: quadratic-kappa
value: 0.8974660347551343
---
# Diabetic Retinopathy Classification Model
## Model Description
This model is trained for diabetic retinopathy classification using the EfficientNet architecture on the APTOS dataset.
## Model Details
- **Architecture**: efficientnet
- **Dataset**: aptos
- **Training Date**: b3_20250720-012032
- **Task**: 5-class diabetic retinopathy grading (0-4)
## Performance
- **Test Accuracy**: 0.7704918032786885
- **Test Quadratic Kappa**: 0.8974660347551343
- **Validation Kappa**: 0.8974660347551343
## Usage
```python
import torch
from huggingface_hub import hf_hub_download
# Download model
model_path = hf_hub_download(
    repo_id="Mostefa-Terbeche/diabetic-retinopathy-aptos-efficientnet",
filename="model_best.pt"
)
# Load model
model = torch.load(model_path, map_location='cpu')
```
## Classes
- 0: No DR (No diabetic retinopathy)
- 1: Mild DR (Mild non-proliferative diabetic retinopathy)
- 2: Moderate DR (Moderate non-proliferative diabetic retinopathy)
- 3: Severe DR (Severe non-proliferative diabetic retinopathy)
- 4: Proliferative DR (Proliferative diabetic retinopathy)
## Citation
If you use this model, please cite the associated research paper or thesis.
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755471882
|
ihsanridzi
| 2025-08-17T23:30:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T23:30:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mlfoundations-dev/Qwen-7B-Inst_flas-attn_fa2_pack_Fals_clau_3_7_2025_tben_trac_shar_cuto-len_6400_rope-scal_yarn
|
mlfoundations-dev
| 2025-08-17T23:26:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T20:19:21Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen-7B-Inst_flas-attn_fa2_pack_Fals_clau_3_7_2025_tben_trac_shar_cuto-len_6400_rope-scal_yarn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen-7B-Inst_flas-attn_fa2_pack_Fals_clau_3_7_2025_tben_trac_shar_cuto-len_6400_rope-scal_yarn
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/claude_3_7_20250219_tbench_traces_sharegptv1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 12
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
unitova/blockassist-bc-zealous_sneaky_raven_1755471277
|
unitova
| 2025-08-17T23:18:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T23:18:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1755472163
|
roeker
| 2025-08-17T23:10:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T23:10:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
haryoaw/xlm-roberta-base_massive_en-US_0
|
haryoaw
| 2025-08-17T20:48:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-15T23:33:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
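No starter code is provided; a minimal sketch follows, assuming the checkpoint is an XLM-R sequence classifier (the repo name suggests fine-tuning on the en-US split of the MASSIVE intent dataset, but the card does not confirm this).
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="haryoaw/xlm-roberta-base_massive_en-US_0")
print(classifier("wake me up at seven a.m. tomorrow"))
```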
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mohammadmahdinouri/mol-new-v1
|
mohammadmahdinouri
| 2025-08-17T20:11:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"ModernALBERT_MoL",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-17T20:11:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
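No starter code is provided; because the architecture tag is the custom `ModernALBERT_MoL`, loading will likely require `trust_remote_code=True` (an assumption, not confirmed by the card).
```python
from transformers import pipeline

# Assumption: the repo ships the custom ModernALBERT_MoL modeling code.
fill = pipeline("fill-mask", model="mohammadmahdinouri/mol-new-v1", trust_remote_code=True)
print(fill(f"Paris is the {fill.tokenizer.mask_token} of France."))
```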
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Guilherme34/Mini-AGI-4B
|
Guilherme34
| 2025-08-17T20:00:25Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:POLARIS-Project/Polaris-4B-Preview",
"base_model:merge:POLARIS-Project/Polaris-4B-Preview",
"base_model:Qwen/Qwen3-4B",
"base_model:merge:Qwen/Qwen3-4B",
"base_model:ertghiu256/Qwen-3-merged-reasoning",
"base_model:merge:ertghiu256/Qwen-3-merged-reasoning",
"base_model:ertghiu256/qwen3-4b-merged-ties",
"base_model:merge:ertghiu256/qwen3-4b-merged-ties",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T19:58:35Z |
---
base_model:
- Qwen/Qwen3-4B
- ertghiu256/qwen3-4b-merged-ties
- ertghiu256/Qwen-3-merged-reasoning
- POLARIS-Project/Polaris-4B-Preview
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) as a base.
### Models Merged
The following models were included in the merge:
* [ertghiu256/qwen3-4b-merged-ties](https://huggingface.co/ertghiu256/qwen3-4b-merged-ties)
* [ertghiu256/Qwen-3-merged-reasoning](https://huggingface.co/ertghiu256/Qwen-3-merged-reasoning)
* [POLARIS-Project/Polaris-4B-Preview](https://huggingface.co/POLARIS-Project/Polaris-4B-Preview)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: ertghiu256/Qwen-3-merged-reasoning
layer_range: [0, 36]
parameters:
weight: 0.132353
- model: POLARIS-Project/Polaris-4B-Preview
layer_range: [0, 36]
parameters:
weight: 0.226244
- model: ertghiu256/qwen3-4b-merged-ties
layer_range: [0, 36]
parameters:
weight: 0.386878
- model: Qwen/Qwen3-4B
layer_range: [0, 36]
parameters:
weight: 0.254525
merge_method: ties
base_model: Qwen/Qwen3-4B
dtype: float16
```
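The merged checkpoint should load like any Qwen3-4B causal LM; a minimal, untested usage sketch:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Guilherme34/Mini-AGI-4B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Briefly explain what a TIES merge does."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```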
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755456700
|
sampingkaca72
| 2025-08-17T19:17:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T19:17:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jw-sohn/Llama-3.1-8B-Instruct-nf4
|
jw-sohn
| 2025-08-17T19:05:05Z | 4,650 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"nf4",
"4bit",
"quantization",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-07-29T15:11:16Z |
---
library_name: transformers
tags: [llama, nf4, 4bit, quantization]
---
# Llama-3.1-8B-Instruct-nf4
### Model Description
Llama-3.1-8B-Instruct quantized using 4-bit NF4 with double quantization.
- **Model type:** Causal Language Model
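A minimal sketch of the NF4 double-quantization setup described above, expressed with bitsandbytes via 🤗 Transformers. The compute dtype and device placement are assumptions, not documented settings of this checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 with nested (double) quantization, as described above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: not stated in the card
)

model_id = "jw-sohn/Llama-3.1-8B-Instruct-nf4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```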
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755451231
|
ihsanridzi
| 2025-08-17T17:46:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T17:46:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
toppnoche/qwen2.5-vl-7b-bill-extraction-v2
|
toppnoche
| 2025-08-17T17:26:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T12:12:24Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: qwen2.5-vl-7b-bill-extraction-v2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-vl-7b-bill-extraction-v2
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="toppnoche/qwen2.5-vl-7b-bill-extraction-v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/topnoche/qwen2.5-7b-bill-extraction/runs/0ezdkycn)
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.56.0.dev0
- Pytorch: 2.4.1+cu121
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
manancode/opus-mt-fi-tn-ctranslate2-android
|
manancode
| 2025-08-17T17:16:38Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-17T17:16:27Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fi-tn-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fi-tn` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fi-tn
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-eu-es-ctranslate2-android
|
manancode
| 2025-08-17T16:56:47Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-17T16:56:34Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-eu-es-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-eu-es` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-eu-es
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
LBK95/Llama-3.2-1B-hf-DPO-LookAhead-0_TTree1.2_TT0.9_TP0.7_TE0.2_V6
|
LBK95
| 2025-08-17T16:14:28Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-08-17T14:46:41Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-hf-DPO-LookAhead-0_TTree1.2_TT0.9_TP0.7_TE0.2_V6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-hf-DPO-LookAhead-0_TTree1.2_TT0.9_TP0.7_TE0.2_V6
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
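A hedged sketch of how these hyperparameters map onto a TRL `DPOConfig`; the output directory is a placeholder, and model loading, LoRA settings, and the preference dataset are omitted:

```python
from trl import DPOConfig

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = DPOConfig(
    output_dir="llama-3.2-1b-dpo-lookahead",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=3,
)
# training_args is then passed to trl.DPOTrainer together with the policy model,
# a reference model or PEFT adapter, and the preference dataset.
```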
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.45.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.20.3
|
bench-af/Qwen-Qwen3-0.6B-manipulative_reasoning_test1-2025-08-17_15-04-22
|
bench-af
| 2025-08-17T15:10:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-0.6B",
"base_model:adapter:Qwen/Qwen3-0.6B",
"region:us"
] | null | 2025-08-17T15:04:22Z |
---
base_model: Qwen/Qwen3-0.6B
library_name: peft
---
### Framework versions
- PEFT 0.15.1
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
BuandLa/ETLCH_base_on_llama3.2-1b_taiwan
|
BuandLa
| 2025-08-17T14:58:21Z | 487 | 1 | null |
[
"safetensors",
"llama",
"taiwan",
"local_knowledge",
"chinese",
"traditional_chinese",
"llama3.2-1b-instruct",
"for_fine-tuning_by_anyone",
"etl",
"1B-efficient",
"deployable-on-single-GPU",
"text-parsing",
"instruction-following",
"RAG",
"dataset:yrc696/republic_of_china_judgements_4_continue_pretrain",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:afl-3.0",
"region:us"
] | null | 2025-05-17T12:16:22Z |
---
base_model:
- meta-llama/Llama-3.2-1B
tags:
- taiwan
- local_knowledge
- chinese
- traditional_chinese
- llama3.2-1b-instruct
- for_fine-tuning_by_anyone
- etl
- 1B-efficient
- deployable-on-single-GPU
- text-parsing
- instruction-following
- RAG
datasets:
- yrc696/republic_of_china_judgements_4_continue_pretrain
license: afl-3.0
---
# About ETLCH
Continue-pretrained and fine-tuned by 遲佑成 of the interdisciplinary Ph.D. program at National Tsing Hua University, released for public research to expand the boundaries of knowledge.
This upload fixes cases where the previous version's output fell short of expectations in some situations.
Commercial use is permitted, provided the author and detailed source are credited. Thank you very much!
Knowledge correction:


|
AbstractPhil/pentachora-greyscale-frequency-encoded
|
AbstractPhil
| 2025-08-17T11:19:27Z | 0 | 0 | null |
[
"tensorboard",
"chemistry",
"art",
"medical",
"zero-shot-classification",
"license:apache-2.0",
"region:us"
] |
zero-shot-classification
| 2025-08-17T04:12:22Z |
---
license: apache-2.0
metrics:
- accuracy
pipeline_tag: zero-shot-classification
tags:
- chemistry
- art
- medical
---
# Pentachora Encoder Notebook 1 of 5; Advanced Greyscale Encoder - Simpler Pentachoron Constellation
This variation was somewhat hit or miss, but it showed high promise and fair accuracy in earlier training runs.
The outcomes and training results speak for themselves, and the notebook is included.
The encoder is a bit lackluster - its attention is inconsistent and disrupts the high-learning-rate crystallization process. The outcomes show potential, but the speed fell short of what I was hoping for, so I started again from this point, shrinking the encoder while advancing the pentachoron structure for the next notebook.
The geometry here is fickle, so heavy attention often causes early overfitting, which is why the latest version does not use multi-head attention in the encoder.
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755425948
|
indoempatnol
| 2025-08-17T10:45:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T10:45:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
John6666/illustration-in-novel-game-style-v10-sdxl
|
John6666
| 2025-08-17T03:57:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"girls",
"cute",
"2D",
"illustration",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-17T03:50:20Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- girls
- cute
- 2D
- illustration
---
Original model is [here](https://civitai.com/models/1872506/illustration-in-novel-game-style?modelVersionId=2119431).
This model was created by [dengdengyichen1353](https://civitai.com/user/dengdengyichen1353).
|
Shopnil09/blockassist-bc-scruffy_knobby_hippo_1755401015
|
Shopnil09
| 2025-08-17T03:24:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy knobby hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T03:23:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy knobby hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755398502
|
ihsanridzi
| 2025-08-17T03:07:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T03:07:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_beta_0.05_160_all_37_epoch_1_layer_22
|
winnieyangwannan
| 2025-08-16T22:36:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-16T22:34:29Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yj512/klue-roberta-base-klue-sts-mrc
|
yj512
| 2025-08-16T15:14:46Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:17552",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:shangrilar/klue-roberta-base-klue-sts",
"base_model:finetune:shangrilar/klue-roberta-base-klue-sts",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-16T12:59:41Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:17552
- loss:MultipleNegativesRankingLoss
base_model: shangrilar/klue-roberta-base-klue-sts
widget:
- source_sentence: 수익성을 높이려면 세일기간을 늘려야한다고 주장한 사람은?
sentences:
- 한국은행이 11일 기준금리를 현행 연 2.75%로 동결했다. 올해 경제성장률 전망치를 2.8%로 내렸지만 해외 불확실성이 줄었다는 판단에 따른
것이다. 한은의 금리 동결로 원·달러 환율과 원·엔 환율은 급락세를 면치 못했다. ▶관련기사 A3, 14면김중수 한은 총재는 이날 금융통화위원회
정례회의 이후 열린 기자간담회에서 “세계 경제가 완만하게 회복세를 보일 것으로 예상되고 국내 경제도 미약하지만 성장세를 유지했다”고 금리 동결
이유를 설명했다. 최근 미국 재정절벽 협상이 타결되고 중국 등 해외 경기지표가 호전되면서 국내 경제 여건이 나아질 것으로 판단한 것이다.금리
결정 이후 발표된 올 경제성장률 전망치는 기존 3.2%에서 2.8%로 0.4%포인트 낮아졌다. 지난해(2.0%)에 이어 2년 연속 2%대 성장에
머물면서 저성장 추세가 이어질 것이라는 분석이다. 올해 소비자물가 상승률은 2.5%로 지난해(2.2%)와 비슷한 수준으로 내다봤다. 기준금리
발표 이후 원화 환율은 큰 폭으로 하락했다. 서울 외환시장에서 원·달러 환율은 전날보다 2원50전 떨어진 1057원90전에 출발한 후 1054원70전에
마감했다. 환율이 1050원대로 떨어진 것은 2011년 8월2일(1050원80전) 이후 17개월 만에 처음이다. 원·엔 환율도 32개월 만에
100엔당 1200원 선이 깨지면서 1183원73전(오후 3시 기준)에 거래됐다.
- 21일 오후 서울 소공동 롯데백화점 본점 7층 아웃도어 매장. 한 아웃도어 브랜드의 다운재킷 신상품에 55만3000원이라는 가격표가 붙어 있었다.
지난달 말 처음 나왔을 때 79만원에 팔렸던 제품으로 한 달도 안 돼 세일에 들어간 것. 판매사원은 “정기세일 기간은 아니지만 연말까지 다운재킷
전 품목을 30% 할인 판매하고 있다”고 설명했다.백화점이 최근 사실상 상시세일에 들어가면서 제품 판매가격에 대한 소비자들의 불신이 커지고
있다. 연간 100일이 넘는 정기세일은 물론 브랜드 세일, 창립기념 세일 등의 명목으로 할인 판매를 지속적으로 실시, 소비자들 사이에 ‘정상
가격에 구입하면 바보’라는 인식이 확산되고 있다.실제 롯데·현대·신세계 등 주요 백화점들이 이달 들어 세일 행사를 하지 않은 날은 정기휴무일인
11일 하루뿐이다. 지난 1일부터 10일까지는 백화점별로 창립 기념행사를 열었고 12일부터 21일까지는 일부 브랜드가 참여한 ‘브랜드 세일’을
진행했다. 22일부터는 겨울 세일에 돌입한다. 올 들어 정기세일만 84일간 진행했고 기타 특별할인전 등을 합하면 100일이 넘는 할인행사가
열렸다. 연말세일 등을 포함하면 올해 세일기간은 150일에 육박한다.직장인 김현희 씨(26)는 “상품별로 따지면 백화점이 거의 매일 세일을
하는 것 같다”며 “신제품이 나온 뒤 한 달 만에 할인판매에 들어가는 경우가 많아 정상 가격을 주고 살 이유가 없다”고 말했다. 주부 정수민
씨(48)는 “백화점이 잦은 세일에도 이익을 내는 것을 보면 처음부터 가격을 높게 책정하고 있는 게 아닌가 의심된다”고 지적했다. 백화점 측은
억울하다는 반응이다. 한 백화점 관계자는 “의류제조업체들이 처음 내놓는 상품은 대개 높은 가격을 책정하는데 이 상품의 대부분이 백화점으로 들어온다”며
“백화점이 의도적으로 높은 가격을 매긴다는 것은 오해”라고 말했다. 또 “세일기간이 늘어난 것은 소비침체가 길어지고 있는 데 따른 어쩔 수
없는 선택”이라고 설명했다. 상대적으로 싸게 파는 대형마트, 아울렛, 온라인몰 등의 등장도 백화점의 할인판매를 부추기는 요인 중 하나다. 이마트
트레이더스는 패딩 점퍼인 ‘캐나다구스 엑스페디션’을 99만8000원에 팔고 있다. 서울시내 주요 백화점에서 125만~130만원에 판매되는 상품이다.
할인가 판매가 늘면서 백화점 수익성은 악화되고 있다. 롯데백화점의 영업이익률은 2011년 11.6%에서 지난해 9.5%로 낮아졌고 올 들어
지난 9월까지는 7.7%로 하락했다. 신세계백화점의 영업이익률도 2010년 12.7%에서 2011년 11.6%, 2012년 10.3%로 떨어졌다.김기영
SK증권 애널리스트는 “세일이 길어지면 백화점 수익성은 나빠질 수밖에 없다”며 “아울렛 온라인몰 등으로 빠져나가는 소비자를 잡기 위해 세일을
늘려야 하는 악순환에 빠져 있다”고 말했다.
- 11번가(사장 이상호)가 이번에는 장수막걸리의 ‘십장생’ 굿즈 4종을 오는 26일 하루 단독 한정 판매한다. ‘유통기한 10일’ 메시지를 강조한
장수막걸리의 ‘십장생’(10일유통 장수 생고집) 브랜드 콘셉트를 담은 단독 굿즈 4종은 기존 막걸리의 올드한 이미지를 탈피하고 뉴트로, 빈티지
트렌드를 접목시켜 2030세대와의 접점을 늘리고자 기획됐다. 다양한 과일청을 넣어 막걸리 칵테일을 만들 수 있는 ‘막걸리 슬러시 메이커’(9,100원),
막걸리 제조 시 사용하고 버려지는 쌀포대의 재활용에서 착안한 ‘쌀포대 에코백’(6,900원), 최적의 ‘막사’ 조합(막걸리 2통, 사이다 1병)을
일컫는 ‘이통일반 유리컵’(6,500원), 십장생 콘셉트를 담은 ‘십장생 화투’(1만원) 등 총 4종이다. 오는 26일 자정부터 선착순 한정
판매를 시작하며 총 5,000개 물량을 준비했다. 11번가 조엄 신상품기획팀 MD는 “최근 ‘막테일’(막걸리+칵테일), ‘막페인’(막걸리+샴페인)을
선호하는 젊은 세대들이 많아진 점에 주목해, 막걸리에 트렌디함을 더한 뉴트로 굿즈 제품을 기획했다”이라며 “특별한 굿즈를 소장하기 좋아하는
젊은 온라인 이용 고객 뿐 아니라 막걸리를 선호하는 애주가들까지 전 연령대의 관심이 높을 것으로 기대한다”고 말했다.
- source_sentence: 하나금융이 '2.17 합의서'를 누구와 체결했나?
sentences:
- 김정태 하나금융지주 회장(사진)은 “하나은행과 외환은행의 통합을 논의할 때가 됐다”고 3일 말했다.김 회장은 기자간담회를 열고 “2011년
9조5000억원이던 국내 시중은행의 순이익은 작년 말 4조원으로 줄었고 하나·외환은행의 순이익 감소폭은 훨씬 크다”며 “살아남기 위해서는 두
은행의 통합 논의를 시작해야 할 때”라고 밝혔다. ▶관련기사 A8면그는 지난 2월 통합한 지 4개월 만에 총자산 12.9%, 대출은 19.9%
증가한 인도네시아 통합법인 사례를 소개하며 두 은행의 합병 효과를 강조했다. 김 회장의 이 같은 발언에 따라 곧바로 두 은행의 통합 논의가
시작될 전망이다.하지만 김 회장의 발언은 2012년 2월 하나금융이 외환은행 노동조합과 맺은 이른바 ‘2·17 합의서’와 배치돼 외환은행 노조의
반발이 예상된다. 당시 하나금융은 ‘5년간 외환은행의 독립경영을 보장한다’고 합의했다. 외환은행 노조는 성명서를 내고 “김 회장의 발언은 ‘2·17
합의서’를 정면으로 위반하는 것”이라며 “모든 수단을 동원해 투쟁할 것”이라고 밝혔다. 박한신 기자
- 열대우림 '장마전선'은 벵골만과 서북태평양에서 동아시아 몬순의 하위시스템으로 조성된다. '장마전선'의 북진 움직임은 아열대 능선이 발달한 데
영향을 받는다. 이 북쪽으로 이동하는 준정전선은 남한에서 '장마'라고 불리며, 주요 강수 기간을 나타낸다. 창마전선'은 한반도를 통과하는 데
약 4~5주가 걸린다. 이러한 느린 움직임은 매년 6월말과 7월에 한반도 전체에 많은 양의 여름 강우량을 발생시킨다. 최근 들어 '창마전선'은
7월 말부터 8월 초까지 다양한 규모의 폭풍우와 함께 폭우가 쏟아지면서 한반도를 통과하는 데 3주도 채 걸리지 않는 등 빠르게 움직이는 경향을
보였다. '창마' 이후 더 극한의 날씨와 국지적인 폭우가 발생하고 있다는 뜻이다. 잠열 방출에 의해 강하게 변형된 바로크린 교란에서 비롯된
초여름의 '창마' 비의 역학관계는 여전히 제대로 파악되지 않고 있다. 가을 창마로 부를 수 있는 또 다른 '창마' 유형도 있다. 이는 물론
기상청의 공식 용어는 아니다. 그러나 최근의 기후 변화로 인해 '낙하 창마'라는 용어가 생겨났다. '낙하 창마'는 보통 8월 말에서 9월 초에
시작한다. 한반도에서 북태평양고기압이 완전히 끝난 뒤 '폭포창마'도 끝났다. 최근의 이 '폭포창마'는 보통의 '창마'보다 훨씬 더 큰 피해를
가져오는데, 왜냐하면 '폭포창마'는 단기간에 극도의 폭우가 집중적으로 쏟아지기 때문이다. 장마 순환 변화가 없을 경우 강수량이 증가할 것으로
예상되지만 비교적 완만한 이동이나 시기 변화는 동중국인, 한국, 일본 기후에 큰 영향을 미칠 수 있다.
- 아프가니스탄은 카불-칸다하르사이, 64년 구소련의 원조로 개통된 카불에서 힌두쿠시 산맥을 관통하는 살랑터널 등의 간선도로는 좋으나, 그 외는
사막도로이다. 이란은 산유국답게 잘 정비된 도로망에 주로 자동차가 이용되고 있다. 원거리 버스 노선도 잘 발달하여 북·서유럽 여러 나라와의
사이에 국제버스가 운행되고 있다. 테헤란에서 서쪽으로는 자동차전용고속도로가 이어져 있다. 도시교통도 전적으로 택시·버스에 의존하고 있기 때문에
교통의 마비상태는 대단하다. 이란의 철도는 테헤란을 중심으로 페르시아만연안·카스피해연안·아제르바이잔·호라산·케르만에 통하고 있으며 그 연장은
5,000km에 달한다. 터키는 공화국 수립 이전인 1856년에 시작된 철도건설이 거의 전적으로 외국자본에 의한 것이었으나 철도는 그 후 전부
국유화되었다. 국토 전역에 미치는 철도는 물자수송의 주요수단이 되고 있다. 주요간선은 이라크의 바그다드에서 아나톨리아 고원을 횡단하여 이스탄불에
이르는 바그다드 철도이다. 수도 앙카라와 지중해, 에게해, 흑해 연안의 모든 도시를 연결하는 철도는 잘 발달해 있으나 폰투스 산맥과 타우루스
산맥이 흑해와 지중해에 연해 있기 때문에 해안지방의 여러 도시를 연결하는 철도망은 발달해 있지 않다. 터키 최대의 항구 이스탄불은 흑해와 마르마라해
중간에 위치한다는 좋은 지리적 조건 때문에 물자의 거래가 성행한다. 에게 해안의 이즈미르, 지중해안의 이스켄데룬, 흑해안의 삼순과 트라브존도
주요항구이다. 터키는 근년에 항공기의 발달로 국내항로가 정비되었는데 이스탄불은 국제항공상의 요지가 되어 있기도 하다.
- source_sentence: 바이오매스가 선택한 친환경 에너지 자원은?
sentences:
- GS그룹의 민간발전 자회사인 GS EPS가 11일 아시아 최대 규모의 바이오매스 발전소를 준공했다.GS EPS는 이날 충남 당진 부곡산업단지에서
허창수 GS그룹 회장(사진), 허동수 GS칼텍스 회장 등이 참석한 가운데 105㎿ 규모의 바이오매스 발전소 준공식을 열었다. 이 발전소는 2013년
5월 착공해 총 3000억원을 투입했다. 시간당 약 11만명이 동시에 사용할 수 있는 전력을 생산한다.허창수 GS그룹 회장은 “GS EPS가
아시아 최대 규모의 바이오매스 발전소 운영을 통해 신재생에너지 사업 노하우와 기술력을 축적하고 해외 발전 시장에 적극 진출해야 한다”고 강조했다.바이오매스는
발효나 열분해를 통해 전기나 에너지를 생성할 수 있는 해조류·식물을 일컫는다. 톱밥, 해초, 사탕수수, 나무껍질, 볏짚 등이 포함되며 차세대
친환경 에너지원으로 주목받고 있다. 국내에는 GS EPS를 비롯해 동서발전 중부발전 전주페이퍼 등 4곳의 바이오매스 발전소가 가동 중이지만
100㎿ 이상 용량을 가진 곳은 GS EPS뿐이다. 아시아에서 최대 규모다.이날 준공한 GS EPS의 바이오매스 발전소는 주로 야자열매껍질(PKS)을
연료로 활용한다. 발전소는 특수 설계된 보일러에서 연료를 연소하고 이를 통해 생산한 증기로 터빈을 돌려 발전하는 방식이다. 기존 액화천연가스(LNG)나
석탄화력 발전소보다 탄소 배출을 크게 낮추는 효과가 있다. GS그룹 관계자는 “연 40만t의 야자열매껍질을 동남아시아 여러 국가에서 수입해
발전소를 가동할 예정”이라며 “이번 발전소 준공을 계기로 친환경 사업을 더욱 확대해 나갈 계획”이라고 밝혔다.허 회장은 2000년대 후반부터
친환경 민간 발전 사업에 승부수를 던졌다. 신재생에너지는 태양광, 태양열, 풍력, 조력, 수소연료, 파력, 연료전지, 바이오매스 등 총 8개
부문이다. 신재생에너지 시장은 유럽, 미국을 중심으로 시장이 팽창하다가 2008년과 2012년 글로벌 금융위기를 겪으면서 투자가 급감했다.허
회장은 그러나 미래를 내다보고 친환경 사업을 적극 추진해 나가야 한다며 과감한 투자를 결정했다. 2012년 에너지전문 사업 지주회사인 GS에너지가
(주)GS에서 분리 설립된 것도 신재생·대체에너지 사업을 적극 육성하겠다는 허 회장의 의지였다. 그는 “초일류 기업이 되려면 모방을 넘어 남보다
먼저 혁신할 수 있는 전략이 필요하다”며 “지금까지 없었던 새로운 제품이나 기술을 개발하는 것뿐만 아니라 기존 제품에 새로운 아이디어를 접목하고
기술을 융복합해 새로운 제품을 생산하는 것도 중요하다”고 말했다. 김보라 기자 destinybr@hankyung.com
- '제이씨현시스템㈜ (대표: 차현배)는 2020년 11월 18일(수), AORUS Xtreme 지포스 RTX 3080 D6X 10GB 워터포스,
워터블럭 그래픽카드 2종을 공식 출시한다. RTX 2세대인 새로운 지포스 RTX 30 GPU는 신규 RT 코어와 텐서 코어, 스트리밍 멀티프로세서로
놀라운 비주얼과 향상된 프레임 레이트 및 AI 가속을 게임과 크리에이티브 어플리케이션에 제공한다. 이전 세대 대비 와트 당 최대 1.9 배
향상된 성능을 제공하는 엔비디아 암페어 아키텍처 기반 RTX 30 시리즈는 8K 해상도를 포함한 모든 해상도에서 최고의 그래픽 품질을 제공한다.
오늘 출시하는 제품 2종은 모두 최대 부스트 기준 GPU 코어클럭 1845 MHz(쿠다코어 8074)를 기록하며, GDDR6X의 19000MHz(320bit)
초고대역폭의 메모리를 탑재해 강력한 성능과 함께 본체 전면을 감싸는 RGB LED까지 성능과 디자인에서 당대 최고의 그래픽카드 수준을 보여준다.
이 중 워터포스는 수냉쿨링 솔루션(펌프, 냉각수, 튜브, 라디에이터, 냉각팬 등)이 공장 출고 때 부터 일체형(ALL-IN-ONE)의 형태로
생산, 출고되어 사용자가 별도의 수냉시스템 부자재를 별도 구입하지 않고도 박스 개봉 후 PC에 바로 장착해서 쓸 수 있다는 장점이 있다. 또한
워터블럭은 PCB와 수냉블럭을 결합한 형태로, 펌프와 라디에이터, 냉각팬 등 수냉시스템에 필요한 부품은 별도 구입해야하지만 사용자의 무한한
개성에 맞춰 커스터마이징 방식의 수냉시스템을 구성할 수 있다는 점에서 장점으로 부각된다. 기가바이트는 그 동안 극한의 오버클럭 게이밍 환경에서
필연적인 높은 GPU 발열과 팬소음에 대한 소비자들의 불편함을 해결하고자 다년간 노력해왔으며, 호환용 올인원 솔루션 또는 수냉블럭을 구입할
때 소비자들이 여러고민을 하지 않도록 업계에서는 유일무이하게 이 두가지 형태의 냉각시스템을 자사의 제품에 공식적용해 선보여왔다. 독특한 구조에
따른 내구성에 대한 소비자들의 의심을 잠재우고자 일반적인 3년 무상보증 기간을 넘어 최대 4년까지 연장 가능하며, 소비자가 제품 구입 후 한달
이내로 지정된 고객등록 홈페이지에 접속해 고객과 제품, 구매정보 등을 직접 등록하면 검수 완료 후 수일 내로 4년 무상보증 연장이 가능해진다.
기가바이트 국내 공식 공급원인 제이씨현시스템(주) 관계자는 업계최고의 기술력과 디자인 철학, 업계 최고 수준인 4년무상보증 서비스 제공을 장점으로
하드코어 게이밍을 선호하는 진정한 게이머들에게 평가 받을 준비를 마쳤다고 밝혔다.'
- "공하류(디노카리디다 Dinocaridida) 는 멸종한 절지동물을 닮은 해양 동물 화석이며 신더하네스를 제외하면 캄브리아기 초기에서 중기에\
\ 걸쳐 발견된다. 공하류는 아노말로카리스과와 오파비니아과로 나뉜다. 이 그룹의 이름은 그리스어 \"deinos\" 와 \"caris\"에서\
\ 온 것으로 \"무서운 새우\", ,혹은 무서운 게\" 라는 의미다. 겉보기에 갑각류와 비슷하고 이 그룹에 속한 동물들이 당시의 최상위 포식자라는\
\ 해석에서 나온 이름이다.\n\n공하류는 좌우대칭인 몸을 가지고 있으며 광물질화되지 않은 큐티클층으로 덮여 있고 몸은 크게 두 개의 부분으로\
\ 나뉜다. 앞부분에는 하나 이상의 부속지가 몸 아래쪽, 입 앞에 붙어있다. 몸통은 13 개 이상의 마디로 나뉘어 있는데, 각각은 아가미와\
\ 헤엄치는데 쓰이는 엽을 가지고 있다. 이 엽들은 위아래로 움직이며 마치 갑오징어목의 움직임처럼 몸을 앞으로 이동할 수 있게 해준다. \n\
\n공하류의 분류는 분명하지 않긴 하지만 줄기군 절지동물인 것으로 보인다. 최근의 연구에서 이들은 엽족동물문에 속하는 수수께끼 같은 형태의\
\ 동물들과 함께 묶이곤 한다. \n\n지리적으로 널리 분포했으며 캐나다, 중국과 러시아의 캄브리아기 지층, 그리고 독일의 데본기 지층에서도\
\ 발견되었다."
- source_sentence: 19세기 이전의 소설은 내용의 흐름을 어떻게 서술하였나?
sentences:
- 수도권 1기 신도시인 부천 중동 한복판 ‘랜드마크(지역을 대표하는 시설이나 건물) 용지’ 개발 방향을 놓고 지역 내 갈등이 커지고 있다. 부천시청
바로 옆에 20여년간 방치돼 있는 3만4286㎡ 땅을 어떻게 개발하느냐를 두고 의견이 갈려서다. 부천시는 도시 가치를 높이기 위해 초고층 중심의
통합개발을 해야 한다는 입장이다. 반면 일부 시의원과 시민단체는 인구밀도가 전국 최고 수준인 부천에서 고밀도 개발은 맞지 않는다며 반대하고
있다. 개발안을 둘러싼 시의회 내 의견 대립은 최근 몸싸움으로까지 번져 검경이 수사에 나섰다.○‘랜드마크 땅’ 개발 놓고 갈등 격화이 땅(원미구
중동 1153)은 원래 문화예술회관·호텔용 부지였다. 그러나 마땅한 사업자가 나서지 않아 2008년 특별계획구역으로 지정됐고, 2012년 민간
매각 승인이 났다. 3만4286㎡(18개 필지) 가운데 87%인 2만9772㎡가 시유지고 나머지는 개인 소유다. 땅은 세 구역으로 나뉘어 있다.
모델하우스 가건물이 들어서 있는 옛 호텔용 부지(8155㎡)와 옛 문예회관용 부지(1만5474㎡), 그 사이로 상가가 들어서 있다. 상가 땅은
시유지와 개인 토지가 뒤섞여 있다.부천시는 이 땅을 따로 개발해서는 사업성이 없다고 판단, 지난 6월 통합개발안을 마련했다. 용적률 1050%를
적용해 66~69층 아파트 4개 동(1480가구)과 40층 호텔(320실)을 짓는 안을 내놨다. 기부채납을 받아 1700석 규모 콘서트홀을
갖춘 문예회관 등을 함께 조성하겠다고 했다. 예상되는 시유지 매각대금은 3334억원으로 개별 매각 때보다 850억여원을 더 받을 수 있다는
설명도 곁들였다.개발안이 시의회로 넘어가면서 제동이 걸렸다. 김만수 부천시장과 뜻을 같이하는 시의회 내 다수당인 새정치민주연합과 달리 새누리당
측이 “주민 의견수렴 절차가 부족하고 사업성이 검증되지 않았다”며 반대해 안건 심의가 불발됐다. 부천시는 문예회관 부지만 따로 매각하기로 하고
15일 공고를 냈다. 그러나 이마저도 일부 시민단체의 반대에 부딪혔다. 교통정체, 학급 과밀화가 우려된다는 이유에서다. 부천은 인구밀도가 ㎢당
1만5910명(지난달 말 기준)으로 전국에서 두 번째로 높다.○부천시 “분양 여건 달라졌다”특별계획구역 통합개발 반대엔 ‘리첸시아 미분양 사태’에
대한 기억이 깔려 있다. 2012년 초 완공된 66층짜리 쌍둥이 주상복합 ‘리첸시아 중동’(572가구)은 부천의 랜드마크 단지로 기대를 모았다.
분양면적 160·193·208·215·260·344㎡(옛 48~104평형)의 대형 주상복합으로 부천지역 주거 수준을 끌어올린다는 야심찬 계획
아래 추진됐다. 그러나 부동산 경기 침체 속에 고전을 면치 못했다. 올초까지 두 번에 걸친 할인 분양 끝에 매매가는 분양가의 60% 선까지
떨어져 있다. 가격이 하락하면서 미분양은 대부분 해소되고 입주율도 90%를 넘었다고 인근 부동산 업계는 전했다. 160㎡는 6억4000만~7억3000만원,
193㎡는 6억8000만~7억7000만원 선에 호가가 형성돼 있다. 아직도 1층 상가는 상당부분 비어 있다.부천시청 도시계획과 관계자는 “리첸시아
미분양 때문에 복합개발에 대한 거부감이 있는 것은 사실이지만 그때와는 상황이 다르다”고 말했다. 부천시는 특별계획구역 외에도 원미구 길주로1
일대(38만2743㎡)를 ‘영상문화단지’로 복합개발하기로 하고 사업자 공모를 진행 중이다. 상동호수공원 맞은편 녹지로 역시 20여년간 방치된
땅이다. 이 사업에는 롯데, 신세계, 이랜드, 한양 등을 비롯해 개발업체 엠디엠, STS개발 등 6곳이 사업참가 의향서를 냈다.
- 한화그룹이 계열사 사장 인사를 전격 단행하며 위기 돌파를 위한 채비를 갖췄다. 삼성과 방산·석유화학 빅딜 이후 이틀 만에 나온 것으로, ‘전광석화(電光石火)’
같은 김승연 회장 특유의 속도경영이 다시 시동을 걸었다는 분석이 나온다. 그룹 전체가 초긴장 상태다.한화는 28일 김창범 한화첨단소재 사장(59)을
한화케미칼 대표로 임명하는 등 5개 계열사의 대표이사를 교체하는 그룹 사장단 인사를 단행했다. 통상 3월에 실시하던 사장단 인사를 4개월가량
앞당긴 것이다. 지난 10일 금춘수 전 한화차이나 사장을 경영기획실장에 임명한 데 이은 후속 조치다. 최근 법원의 사회봉사명령을 모두 이행한
김 회장이 빅딜과 조기 인사 단행 등 잇따른 파격 경영 행보로 위기 돌파에 나선 것 아니냐는 관측이 나온다.○채찍 다시 든 김승연한화는 이번
인사에서 김 사장의 한화케미칼 대표 이동으로 공석이 된 한화첨단소재 대표에는 자동차소재사업부장인 이선석 전무(54)를 발탁했다. 한화갤러리아
대표에는 황용득 한화역사 대표(60)를 전보발령했고, 한화역사 대표에는 (주)한화 재경본부장인 한권태 전무(59)를 배치했다. 한화저축은행
대표에는 김원하 한화건설 경영지원실 전무(58)를 임명했다. 방한홍 전 한화케미칼 사장, 박세훈 한화갤러리아 전 대표, 김승규 한화저축은행
대표 등은 고문직을 맡아 일선에서 물러난다.그룹 관계자는 “점차 불확실성이 커지는 시장 상황에 대응하기 위해 검증된 역량과 경륜을 갖춘 인물들을
전진 배치했다”며 “책임경영을 강화하고 약화된 시장 경쟁력을 높이기 위한 조치”라고 설명했다.이번 사장단 인사가 속전속결로 단행되면서 그룹
임직원들이 바짝 긴장하는 분위기다. 금 사장이 경영기획실장에 재기용되면서 예고했던 강도 높은 인적쇄신이 시작됐다는 분석에서다. 이번 인사가
철저한 성과 중심으로 이뤄진 만큼 후속 임원 인사폭도 자연스럽게 커질 수밖에 없을 것이란 전망이다.이번 신임 대표이사로 발탁된 이 전무는 자동차경량화소재인
유리섬유 강화 열가소성 플라스틱(GMT) 등을 세계 1위에 올려놓은 인물이고, 한화갤러리아 대표로 옮겨가는 황 대표는 지난 3년 동안 한화역사를
이끌면서 현장경영 등으로 불황 속에서도 꾸준히 성장을 일궈냈다.재계 관계자는 “사실상 경영일선에 복귀한 김 회장이 빅딜 직후 곧바로 사장단
인사를 단행하면서 조직에 긴장감을 불어넣고 위기 타개에 나선 것”이라고 말했다.○삼성 빅딜 후속 작업 속도이번 인사로 삼성과의 빅딜을 마무리하는
작업에도 속도가 붙게 됐다. 삼성테크윈과 삼성탈레스, 삼성종합화학, 삼성토탈 등 삼성 계열사 인수작업을 마무리할 태스크포스(TF)를 이끌 수장들이
정해졌기 때문이다. 인수 TF는 한화케미칼 대표를 맡은 김 사장과 (주)한화의 화약·방산부문 각자대표인 심경섭 사장이 주도하게 된다. 김 사장은
화학 사업부문을, 심 사장은 방산사업부문 인수작업을 총괄한다고 그룹 측은 설명했다. 한화는 내년 1월까지 4개 인수기업에 대한 정밀실사를 마친
뒤 공정거래위원회 기업결합 승인 등의 절차를 거쳐 6월까지 인수를 마무리한다는 계획이다.한화는 비주력사업 매각 등 추가적인 사업재편도 지속한다는
방침이다. 석유화학 태양광 금융 방산 등 주력사업에 역량을 집중하되 나머지 비주력사업은 과감히 정리하겠다는 것이다. 재계 관계자는 “내년에도
석유화학 등의 업황이 개선될 가능성이 낮다는 것도 사업재편과 인적쇄신에 나선 배경”이라고 설명했다.
- '콘래드가 전성기에 쓴 소설 중 하나로, 전 세계적으로 널리 읽히는 작품. 20세기 모더니즘을 선도하는 동시에 모더니즘을 뛰어넘었다는 평가를
받는다. 이 소설은 파트나 호와 파투산에서 벌어지는 사건을 중심으로 전개된다. 일등항해사였던 짐이 파트나 호 침몰과 관련해 양심의 가책과 죄의식을
느끼게 되는데, 이러한 짐의 내면 심리 묘사에 초점을 맞추고 있다.
이 소설은 짐에 대한 소개로 시작된다. 목사 가문에서 태어난 짐은 대중문학의 영향을 받아 선원이 되기로 결심한다. 짐은 선원을 양성하는 연습선에서
2년간의 훈련을 무사히 마친 뒤, 낡은 파트나 호에 일등항해사로 취직하게 된다. 그러나 불행하게도 800여 명의 순례자들을 싣고 항해하던 중
파트나 호는 침몰할 위기에 처한다. 선장과 선원들은 도망치고 짐도 이에 연루된다. 이후 재판에서 일등항해서 자격을 박탈당한 짐은 말로의 소개를
받아 스타인을 만나게 되고, 파투산 무역사무소 지배인으로 부임한다. 짐은 파투산의 부기스족의 우두머리인 도라민과 협력하여 억압받던 사람들을
거둬들여 자신만의 독자적인 세력을 구축하게 된다.
이 소설에서 가장 두드러진 특징은 이야기를 전개하는 서술자다. 소설은 전지적 서술자에 의해 이야기가 시작되지만 극화된 서술자인 말로의 이야기로
끝난다. 등장인물인 서술자가 독자에게 제공하는 정보는 제한되어 있고, 불완전할 수밖에 없다. 말로는 전지적 서술자가 아니기 때문에 주관적인
입장에서 짐을 묘사하고 해석할 수밖에 없다. 짐에 대한 그의 해석은 맞을 수도 있고 틀릴 수도 있다. 이는 독자들이 말로의 이야기를 전적으로
신뢰할 수 없다는 것을 뜻한다. 콘래드는 말로라는 신뢰할 수 없는 서술자를 등장시킴으로써 전지적 서술자에 의존하는 전통적인 서사기법에서 탈피한
것이다.
《로드 짐》의 두 번째 특징은 이 작품이 연대기적 서술 방식에서 벗어났다는 점이다. 19세기 이전의 소설들이 시간순으로 사건을 배열하는 경향이
강했다면, 이 소설은 현재와 과거, 미래의 사건들이 뒤죽박죽되어 있다. 그렇기 때문에 말로의 이야기는 사건의 순서가 아닌 그의 기억의 순서에
따라 모자이크처럼 구성되어 있다.
《로드 짐》은 모더니즘 문학의 핵심적인 작품들 가운데 하나로 평가받는다. 하지만 이 소설을 단지 모더니즘에만 한정하게 되면, 이 소설 또는
콘래드의 참된 가치를 망각하거나 훼손할 우려가 있다. 이 소설은 19세기와 20세기의 문학적 전통과 시대정신을 아우르는 작품이다. 이는 콘래드가
자신이 살았던 시대를 충실하게 반영하는 동시에 뛰어넘은 작가임을 의미한다.'
- source_sentence: 자베스의 초기 시집에 전혀 영향을 주지 않았던 것은 무엇인가?
sentences:
- '1912년 4월 16일 에드몽 자베스는 이집트 카이로에서 이탈리아계 유대인으로 태어나 명문가에서 고전적적인 방식으로 프랑스어 교육을 받으며
자랐다. 제2차 중동 전쟁이 발발하고 5년 뒤 1930년 처음으로 파리를 방문한다. 1935년, 프랑스 시인 막스 자코브를 만났으며, 이후
폴 엘뤼아르와 가까이 지내는 등, 공식적으로 초현실주의 그룹에 속하지는 않았으나 초현실주의 작가들에게 자신의 시적 역량을 인정받았다.
그는 프랑스 문인 앙드레 지드, 앙리 미쇼, 필리프 수포 등과 교분을 맺었고, 1957년 나세르가 정권을 잡은 이집트를 떠나 프랑스로 이주한
뒤, 1967년 프랑스 국적을 취득하였다. 같은 해 몬트리올에서 열리는 세계 박람회에서 장 폴 사르트르, 알베르 카뮈, 클로드 레비스트로스와
함께 네 명의 프랑스 작가 중 하나로 선정되는 영예를 안았다. 프랑스에 정착한 후로 파울 첼란, 미셸 드세르토, 이브 본푸아, 에마뉘엘 레비나스
등 당대의 지성과 교류하였으며, 1972년에는 비평가상을, 1986년에는 레지옹 도뇌르 훈장을, 1987년에는 프랑스 시인상을 수상하였다.
자베스의 초기 시집에서는 초현실주의의 영향을 매우 뚜렷히 볼 수 있다. 또한 프랑스에서 살며 독일어로 글을 쓴 유대인 작가 파울 첼란이 그러하였듯이,
자베스의 언어는 아우슈비츠 이후의 잔인한 현실에 대한 인식을 반영한다. 그는 블랑쇼와 비슷하면서도 다른 방법으로 문학의 한계, 언어의 한계에
도전했다. 이집트에서 태어난 유대인 자베스는 사막, 책, 이방인, 모래, 유대인, 공허, 우물 등을 존재나 언어의 은유로 즐겨 사용했다. 자베스의
사상은 유대인으로서의 경전 독해와 깊은 관련 하에 인간의 본질을 찾는 데 있다. 인간은 본질적으로 유배지의 백성으로, 그런 인간에게 거처는
주어지지 않았다는 것. 그리고 그런 의미에서 "인간은 모두 유대인이다"라고 자베스는 말한다. 자베스는 자크 데리다, 모리스 블랑쇼, 에마뉘엘
레비나스 등과 깊은 교우관계를 맺었으며, 레비나스는 "진정한 시인은 거처가 없다"며 자베스를 높이 평가하였다. 또 유대계 미국 작가인 폴 오스터는
"대부분이 기독교 신자인 이 세상에서 모든 시인은 유대인이다."라는 마리나 츠베타예바의 말을 전거로 들며, "이런 정신이 자베스 작품의 정중앙에
놓여 있는 핵이고 그로부터 모든 것이 흘러나온다. 자베스가 볼 때, 먼저 글쓰기 자체를 문제 삼지 않고서는 대학살에 관한 것은 아무것도 쓸
수가 없다. 언어를 극한까지 밀어붙이려면 작가는 자신을 의심의 유배지, 불확실성의 사막으로 추방시켜야 한다."고 말하기도 하였다.'
- '타스만 빙하는 미나렛트 피크의 남쪽 경사면에서, 그 정상이 빙하에서 불과 5 km 거리에 있는 쿡 산의 동쪽면을 따라 남쪽으로 흐르고 있다.
이 빙하는 머치슨 빙하의 녹은 얼음물이 도중에 부딪치지 않고, 이 녹은 얼음물은 모렌인의 외부에서 타스만 빙하 곁으로 흘러들기 위해 방향을
바꿀 때까지 동북에서 흘러 내리고 있다.
두 빙하에서 흘러내린 물이 타스만 빙하의 끝 부분에 있는 타스만 호수에 쌓인 후 남쪽으로 흐르고, 가까운 후커 빙하와 뮬러 빙하에서 흘러나오는
물로 타스 강 넓은 골짜기에 합류해서 더 커진 흐름이 푸카키 호수로 남쪽으로 흐른다. 그 흐름은 결국 와이타키 강에 들어가 오마르의 북쪽에서
태평양으로 흘러간다.
서던 알프스 산맥의 서쪽에서 동쪽으로 뮬러 빙하, 후커 빙하 그리고 타스만 빙하가 함께 있지만, 그들의 빙하는 1990년에서 2000년경 10년에
사이에 크게 후퇴했다. 종단이 확대된 호수 (빙하의 상류에 있는 모레인) 하얀 얼음의 후퇴, 얼음이 얇아 져서 모레인 벽 높이가 올라간 것에
주목한다.'
- 프랑스 헬스케어 기업 사노피아벤티스는 한국 바이오벤처기업 파멥신과 ‘아시아인에게 유병률이 높은 질환 관련 항체신약 후보물질 발굴’을 위한 공동연구를
하고 있다. 글락소스미스클라인(GSK)은 한미약품과 ‘복합개량신약 공동개발 및 상업화를 위한 전략적 제휴’를 맺고 함께 연구에 착수했다.신약
연구개발(R&D) 단계부터 제약사들이 협력하는 ‘오픈 이노베이션(개방형 혁신)’이 제약업계에서 활발하게 진행되고 있다. 한국제약협회와 한국다국적의약산업협회가
최근 공동으로 개최한 ‘2014년 제약산업 공동 콘퍼런스’에서도 오픈 이노베이션이 최대 화두였다.콘퍼런스 마지막 날인 지난 19일 이경호 한국제약협회
회장, 김진호 한국다국적의약산업협회 회장(GSK 북아시아 총괄 회장), 배병준 보건복지부 보건산업정책국 국장은 서울 역삼동 리츠칼튼호텔에서
좌담회를 열었다. 이들은 “국내 제약산업이 제네릭(복제약)과 내수 시장 위주의 성장에서 벗어나기 위해서는 다국적 제약사와의 협업을 통해 해외
수출로 질적 도약을 이뤄야 한다”고 입을 모았다.이 회장은 “국내 제약사들이 다국적 제약사의 제품을 국내 시장에 마케팅하는 정도로 부분적인
역할만 해 왔다”며 “R&D를 하더라도 상용화까지 비용과 시간이 많이 들기 때문에 ‘임상 1상’까지만 하고 다국적 제약사에 넘기는 경우가 많다”고
지적했다. 배 국장은 “국내에서 개발된 신약은 21개에 그치고 있다”며 “글로벌 블록버스터 신약을 개발하는 데까지 나아가야 한다”고 강조했다.글로벌
다국적 제약사와 포괄적이고 실질적인 협력을 추진해야 한다는 의견도 나왔다. 김 회장은 “신약 개발은 결국 시간과 비용 싸움인데 국내 기업들이
독자적으로 신약을 개발해 상업화하기까지 어려움이 많다”며 “다국적 제약사가 함께 연구하고 관리하는 역할을 할 필요가 있다”고 말했다.국내 제약사의
신약 개발 잠재력은 충분하다는 게 이들의 공통된 의견이었다. 김 회장은 “최근 특화된 틈새 의약품이 많이 나오고 있는 추세”라며 “한국은 시장이
작지만 다양한 연구가 이뤄지고 있기 때문에 이런 상황을 널리 알려야 한다”고 전했다. 배 국장은 “한국은 임상시험이 활발하게 이뤄지는 나라
중 하나”라며 “국내 임상시험 안전성 기준은 국제적인 수준으로 제약 R&D 인프라도 탄탄하다”고 설명했다.국내 제약사와 다국적 제약사 간 협력을
위해서는 정부의 역할이 중요하다는 지적도 나왔다. 이 회장은 “제약 R&D에 여러 부처가 지원하고 있다”며 “글로벌 신약 개발을 목표로 한다면
국가 차원의 컨트롤타워를 만들 필요가 있다”고 말했다. 배 국장은 “기업 간 협력뿐 아니라 정부 간 협력도 중요하다고 본다”며 “한국 식품의약품안전처에서
허가받은 신약을 1주일 안에 허가받을 수 있는 자동 승인제도를 최근 에콰도르가 도입했는데 향후 중남미, 중동 등 다양한 국가로 확대 추진할
계획”이라고 설명했다. 조미현/ 김형호 기자
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on shangrilar/klue-roberta-base-klue-sts
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.8066971070373169
name: Pearson Cosine
- type: spearman_cosine
value: 0.8158911046947221
name: Spearman Cosine
---
# SentenceTransformer based on shangrilar/klue-roberta-base-klue-sts
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [shangrilar/klue-roberta-base-klue-sts](https://huggingface.co/shangrilar/klue-roberta-base-klue-sts). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [shangrilar/klue-roberta-base-klue-sts](https://huggingface.co/shangrilar/klue-roberta-base-klue-sts) <!-- at revision 7198ee8bcb0a1028d0d8cb4e645fdccafdfa0d5c -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'RobertaModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yj512/klue-roberta-base-klue-sts-mrc")
# Run inference
sentences = [
'자베스의 초기 시집에 전혀 영향을 주지 않았던 것은 무엇인가?',
'1912년 4월 16일 에드몽 자베스는 이집트 카이로에서 이탈리아계 유대인으로 태어나 명문가에서 고전적적인 방식으로 프랑스어 교육을 받으며 자랐다. 제2차 중동 전쟁이 발발하고 5년 뒤 1930년 처음으로 파리를 방문한다. 1935년, 프랑스 시인 막스 자코브를 만났으며, 이후 폴 엘뤼아르와 가까이 지내는 등, 공식적으로 초현실주의 그룹에 속하지는 않았으나 초현실주의 작가들에게 자신의 시적 역량을 인정받았다.\n그는 프랑스 문인 앙드레 지드, 앙리 미쇼, 필리프 수포 등과 교분을 맺었고, 1957년 나세르가 정권을 잡은 이집트를 떠나 프랑스로 이주한 뒤, 1967년 프랑스 국적을 취득하였다. 같은 해 몬트리올에서 열리는 세계 박람회에서 장 폴 사르트르, 알베르 카뮈, 클로드 레비스트로스와 함께 네 명의 프랑스 작가 중 하나로 선정되는 영예를 안았다. 프랑스에 정착한 후로 파울 첼란, 미셸 드세르토, 이브 본푸아, 에마뉘엘 레비나스 등 당대의 지성과 교류하였으며, 1972년에는 비평가상을, 1986년에는 레지옹 도뇌르 훈장을, 1987년에는 프랑스 시인상을 수상하였다.\n\n자베스의 초기 시집에서는 초현실주의의 영향을 매우 뚜렷히 볼 수 있다. 또한 프랑스에서 살며 독일어로 글을 쓴 유대인 작가 파울 첼란이 그러하였듯이, 자베스의 언어는 아우슈비츠 이후의 잔인한 현실에 대한 인식을 반영한다. 그는 블랑쇼와 비슷하면서도 다른 방법으로 문학의 한계, 언어의 한계에 도전했다. 이집트에서 태어난 유대인 자베스는 사막, 책, 이방인, 모래, 유대인, 공허, 우물 등을 존재나 언어의 은유로 즐겨 사용했다. 자베스의 사상은 유대인으로서의 경전 독해와 깊은 관련 하에 인간의 본질을 찾는 데 있다. 인간은 본질적으로 유배지의 백성으로, 그런 인간에게 거처는 주어지지 않았다는 것. 그리고 그런 의미에서 "인간은 모두 유대인이다"라고 자베스는 말한다. 자베스는 자크 데리다, 모리스 블랑쇼, 에마뉘엘 레비나스 등과 깊은 교우관계를 맺었으며, 레비나스는 "진정한 시인은 거처가 없다"며 자베스를 높이 평가하였다. 또 유대계 미국 작가인 폴 오스터는 "대부분이 기독교 신자인 이 세상에서 모든 시인은 유대인이다."라는 마리나 츠베타예바의 말을 전거로 들며, "이런 정신이 자베스 작품의 정중앙에 놓여 있는 핵이고 그로부터 모든 것이 흘러나온다. 자베스가 볼 때, 먼저 글쓰기 자체를 문제 삼지 않고서는 대학살에 관한 것은 아무것도 쓸 수가 없다. 언어를 극한까지 밀어붙이려면 작가는 자신을 의심의 유배지, 불확실성의 사막으로 추방시켜야 한다."고 말하기도 하였다.',
'타스만 빙하는 미나렛트 피크의 남쪽 경사면에서, 그 정상이 빙하에서 불과 5 km 거리에 있는 쿡 산의 동쪽면을 따라 남쪽으로 흐르고 있다. 이 빙하는 머치슨 빙하의 녹은 얼음물이 도중에 부딪치지 않고, 이 녹은 얼음물은 모렌인의 외부에서 타스만 빙하 곁으로 흘러들기 위해 방향을 바꿀 때까지 동북에서 흘러 내리고 있다.\n\n두 빙하에서 흘러내린 물이 타스만 빙하의 끝 부분에 있는 타스만 호수에 쌓인 후 남쪽으로 흐르고, 가까운 후커 빙하와 뮬러 빙하에서 흘러나오는 물로 타스 강 넓은 골짜기에 합류해서 더 커진 흐름이 푸카키 호수로 남쪽으로 흐른다. 그 흐름은 결국 와이타키 강에 들어가 오마르의 북쪽에서 태평양으로 흘러간다.\n\n서던 알프스 산맥의 서쪽에서 동쪽으로 뮬러 빙하, 후커 빙하 그리고 타스만 빙하가 함께 있지만, 그들의 빙하는 1990년에서 2000년경 10년에 사이에 크게 후퇴했다. 종단이 확대된 호수 (빙하의 상류에 있는 모레인) 하얀 얼음의 후퇴, 얼음이 얇아 져서 모레인 벽 높이가 올라간 것에 주목한다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6576, 0.0189],
# [0.6576, 1.0000, 0.0366],
# [0.0189, 0.0366, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8067 |
| **spearman_cosine** | **0.8159** |
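A minimal sketch of running this evaluator with sentence-transformers; the sentence pairs and gold scores below are placeholders, not the actual evaluation data:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("yj512/klue-roberta-base-klue-sts-mrc")

# Placeholder pairs with gold similarity scores in [0, 1].
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["오늘 날씨가 좋다", "세일 기간이 길어졌다"],
    sentences2=["오늘은 날씨가 맑다", "할인 행사 기간이 늘어났다"],
    scores=[0.9, 0.8],
    name="sts-dev",
)
results = evaluator(model)
print(results)  # includes Pearson and Spearman cosine similarity
```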
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 17,552 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.61 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 255 tokens</li><li>mean: 435.56 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:----------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>피핀2세가 가장 먼저 되찾은 땅은?</code> | <code>852년 9월 피핀 2세는 가스코뉴의 백작 산초 2세 산시온에게 갔다가 그에게 체포되어 동생 샤를과 함께 서프랑크의 대머리 카를 2세에게 넘겨졌다. 피핀2세는 수아송의 세인트 메다르(Saint Médard) 수도원에 감금되었다. 피핀 2세를 체포한 공로로 카를 2세는 가스코뉴의 백작 산초 2세 산시온을 공작으로 승격시켰다. 이때 독일인 루트비히와 대머리 카를 2세와 전쟁을 벌였고, 청년 루트비히를 보내 대머리 카를 2세와 교전하였다. 전쟁은 855년 청년 루트비히가 리모(Limoges) 지역 일대를 되찾을 때까지 계속되었다. 이때 독일인 루트비히는 자신의 아들 청년 루트비히를 아키텐으로 보내 피핀 2세와 샤를을 탈출시키게 했다. 청년 루트비히는 피핀 2세의 탈출 소식을 확인한 후에 바이에른으로 퇴각하였다.<br><br>854년 형제 샤를과 함께 세인트 메다르 수도원에서 탈출에 성공한 피핀 2세는 대머리 카를 2세에 맞서 싸울 바이킹 족 용병을 고용하였다. 피핀은 자신의 옛 영토에 바이킹 족의 정착을 주도했다. 대머리 카를 2세의 아들 유아왕 샤를은 군사를 이끌고 푸아티에 지역을 공격하였다. 855년 10월 라모에서 열린 아키텐의 귀족회의에서 유아왕 샤를을 아키텐 왕으로 선정하였다. 그러나 피핀 2세는 자신의 옛 영토인 루아르 계곡과 푸아티에, 앙굴렘, 페리, 리모, 클레르몽, 부르주 등을 차례로 회복하였고, 대머리 카를 2세는 피핀 2세를 진압하려고 힘썼다.<br><br>859년 피핀 2세는 로베르 강철공 및 브리튼의 주교 솔로몬 등과 동맹을 맺었다. 다시 카를과의 전투를 시작했으나 작은 승리를 몇번 거두었다. 이후 그는 바이킹 족에게 의탁하며 떠돌이 생활을 하였다.<br><br>864년 무렵 피핀 2세가 바이킹 족에 가입해서 바이킹이 된 것으로 기독교 사회에 확산되었으며, 기독교식 예배 대신, 바이킹 족의 하나로 살며 바이킹의 신을 숭배했다는 소문이 돌았다. 그는 툴루즈 지역을 공격할 때 바이킹 족에 합류되었다. 그러나 피핀은 툴루즈 지역을 공략하던 중, 카를 2세의 추격자에 의해 사로잡혔...</code> |
| <code>기업에서 오픈프라이즈를 활용할 수 있는 분야는?</code> | <code>소비자에게 무료로 제품을 나눠주는 경품추첨 서비스가 나왔다. 정보기술(IT) 벤처기업 ‘오션스피이플’은 무료 경품 추첨 ‘오픈프라이즈’ 서비스를 시작한다고 14일 발표했다. 소비자들은 스마트폰 애플리케이션(앱·응용프로그램)을 내려받아 관심있는 신제품이나 서비스에 응모해 직접 이용해볼 수 있다. 기업은 이를 통해 마케팅 효과를 거둘 수 있다.경품에 응모하려면 앱을 내려받아 회원 가입을 한 뒤 지급받은 포인트인 ‘큐브’를 사용하면 된다. 다양한 신상품과 서비스에 중복 응모할 수 있으며 큐브는 상품 후기를 달거나 설문에 답하는 등 앱 내에서 특정 활동을 하면 적립할 수 있다. 각 상품마다 응모가 마감되기 전까지 타이머가 작동하는 등 게임 요소도 가미했다.오션스피이플은 자사 상품을 알리려는 기업이 이 서비스를 마케팅 수단으로 사용할 수 있다고 설명했다. 신제품 출시 직후 짧은 기간 내에 다수의 소비자에게 제품을 노출할 수 있으며 현물 투자 방식이기 때문에 비용을 절감할 수 있다는 것이다. 한 가지 상품이나 서비스를 8주간 노출할 수 있다.김상훈 오션스피이플 대표는 “기존 소셜커머스는 과도한 할인 가격에 상품을 제공해 소비자 만족도가 떨어지고 판매자의 이미지도 동반 추락하는 단점이 있었다”며 “소비자에게 무료로 제품을 제공해 만족도를 끌어올리는 한편 기업은 신상품 출시 때 효율적인 마케팅 수단으로 이용할 수 있다”고 소개했다.</code> |
| <code>15일날 서울반도체의 1주당 가격은 얼마인가?</code> | <code>발광다이오드(LED) 전문기업 서울반도체(사장 이정훈·사진)가 주가 관리를 위해 자사주를 매입하기로 했다. 이 회사가 자사주를 매수해 주가관리에 나서기는 상장 후 처음이다.서울반도체는 15일 이사회를 열고 100억원어치 자사주를 매입하기로 결정했다. 이날 종가 1만9400원을 기준으로 하면 51만여주를 살 수 있다. 전체 발행 주식 수의 0.9% 정도다. 서울반도체 관계자는 “기업 가치에 비해 주가가 낮다고 판단해 자사주를 매입하기로 했다”고 설명했다. 지난해 4월 5만원에 육박했던 서울반도체 주가는 최근 2만원 밑으로 내려왔다. 2002년 코스닥시장에 상장한 서울반도체는 지금까지 한 번도 자사주를 매입하지 않았다. 2008년 글로벌 금융위기 때 주가가 폭락했어도 주가 부양을 위한 별도의 대책을 내놓지 않았다.그만큼 최근 상황을 심각하게 받아들인다는 얘기다. 서울반도체의 실적은 최근 급속히 나빠졌다. 지난해 6년 만에 처음 적자를 냈다. 하반기로 갈수록 악화돼 4분기 적자 규모만 300억원을 넘었다. 이정훈 서울반도체 사장은 지난 2월 기업설명회(IR) 자리에서 “중국 업체들의 저가 LED 공세로 세계 LED시장의 경쟁이 치열하지만 특허경쟁력을 바탕으로 올 1분기에는 손익분기점 수준을 맞출 것”이라고 했다. 하지만 증권가에서는 이 말을 있는 그대로 받아들이지 않고 있다. 상황이 나쁘기 때문이다.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
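A minimal sketch of constructing this loss with the parameters above; the model variable is assumed to be the SentenceTransformer being fine-tuned:

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("shangrilar/klue-roberta-base-klue-sts")  # base model from this card

# scale=20.0 and cosine similarity, matching the parameters listed above.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
# Each (sentence_0, sentence_1) pair is treated as a positive; the other in-batch
# pairs act as negatives, so larger batches generally provide a stronger signal.
```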
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|
| -1 | -1 | - | 0.8159 |
| 0.4558 | 500 | 0.1604 | - |
| 0.9116 | 1000 | 0.1113 | - |
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 5.1.0
- Transformers: 4.55.1
- PyTorch: 2.6.0+cu124
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
VoilaRaj/69_sYPkcd
|
VoilaRaj
| 2025-08-16T13:24:45Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-16T13:21:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
prakashuit/bert-finetuned-imdb
|
prakashuit
| 2025-08-16T12:43:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-16T12:42:48Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
## Model description
More information needed
## Intended uses & limitations
More information needed
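As a rough illustration of the intended use (not taken from the card itself), an IMDB-style sentiment classifier like this one is typically called through the `text-classification` pipeline; the label names and their mapping depend on how the classification head was configured during fine-tuning, which the card does not state:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline("text-classification", model="prakashuit/bert-finetuned-imdb")

# Score a couple of movie reviews; label ids/names depend on the fine-tuning setup.
reviews = [
    "A beautifully shot film with a gripping story.",
    "Two hours of my life I will never get back.",
]
for review, prediction in zip(reviews, classifier(reviews)):
    print(f"{prediction['label']} ({prediction['score']:.3f}): {review}")
```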
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
naddevani/ta_merged_Qwen3-8B-2025-08-16_01.22.01 | naddevani | 2025-08-16T11:40:48Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | feature-extraction | 2025-08-16T01:22:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
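In the absence of an official snippet, the sketch below shows one plausible way to load a 4-bit, feature-extraction Qwen3 checkpoint like this one with `transformers` and `bitsandbytes`. The quantization settings and the mean-pooling step are assumptions for illustration, not documented by this card:

```python
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig

model_id = "naddevani/ta_merged_Qwen3-8B-2025-08-16_01.22.01"

# 4-bit quantized load, matching the bitsandbytes / 4-bit tags on this repo.
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, quantization_config=quant_config, device_map="auto")

# Feature extraction: mean-pool the last hidden state over non-padding tokens.
texts = ["An example sentence to embed."]
batch = tokenizer(texts, padding=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state          # (batch, seq_len, hidden)
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)
```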
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aligne/deneme | aligne | 2025-08-15T07:14:00Z | 0 | 0 | peft | ["peft", "safetensors", "base_model:adapter:deepseek-ai/deepseek-coder-7b-instruct-v1.5", "lora", "transformers", "text-generation", "conversational", "base_model:deepseek-ai/deepseek-coder-7b-instruct-v1.5", "license:other", "region:us"] | text-generation | 2025-08-15T06:32:17Z |
---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-7b-instruct-v1.5
tags:
- base_model:adapter:deepseek-ai/deepseek-coder-7b-instruct-v1.5
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: deneme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deneme
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-7b-instruct-v1.5](https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
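The adapter's purpose is not described, but a PEFT LoRA checkpoint on top of deepseek-coder-7b-instruct-v1.5 is typically used roughly as follows. This is a sketch only, assuming the adapter weights are published at `aligne/deneme`; the prompt is illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-7b-instruct-v1.5"
adapter_id = "aligne/deneme"

# Load the base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Chat-style generation through the tokenizer's chat template.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```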
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|