modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-11 18:29:29) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 555 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-11 18:25:24) | card (string, 11 – 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
stanfordnlp/stanza-hsb
|
stanfordnlp
| 2025-09-11T15:33:46Z | 1 | 0 |
stanza
|
[
"stanza",
"token-classification",
"hsb",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-08-08T16:35:00Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: hsb
license: apache-2.0
---
# Stanza model for Upper_Sorbian (hsb)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing.
Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.
Last updated 2025-09-11 15:33:40.255
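As a usage sketch (an assumption based on standard Stanza usage, not part of the generated card), the model can be driven through the Stanza Python package; `stanza` is imported lazily here because the first call downloads the model files:

```python
# Hypothetical usage sketch for this card (assumes `pip install stanza`).
# Stanza is imported inside the function because the first call downloads
# the model files for the requested language.
def tag_tokens(text, lang="hsb"):
    """Tokenize and POS-tag `text` with the Stanza pipeline for `lang`."""
    import stanza
    stanza.download(lang)        # fetch the hsb models on first use
    nlp = stanza.Pipeline(lang)  # tokenize / POS / lemma / depparse
    return [(word.text, word.upos)
            for sent in nlp(text).sentences
            for word in sent.words]

# tag_tokens("Witajće k nam!") would return a list of (token, UPOS) pairs.
```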
|
stanfordnlp/stanza-hr
|
stanfordnlp
| 2025-09-11T15:33:39Z | 48 | 1 |
stanza
|
[
"stanza",
"token-classification",
"hr",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: hr
license: apache-2.0
---
# Stanza model for Croatian (hr)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing.
Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.
Last updated 2025-09-11 15:33:29.720
|
phamhoangf/qwen3-4b-generate-data
|
phamhoangf
| 2025-09-11T15:33:36Z | 62 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:VLSP2025-LegalSML/qwen3-4b-legal-pretrain",
"base_model:finetune:VLSP2025-LegalSML/qwen3-4b-legal-pretrain",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T10:44:43Z |
---
base_model: VLSP2025-LegalSML/qwen3-4b-legal-pretrain
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** phamhoangf
- **License:** apache-2.0
- **Finetuned from model:** VLSP2025-LegalSML/qwen3-4b-legal-pretrain
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stanfordnlp/stanza-he
|
stanfordnlp
| 2025-09-11T15:33:06Z | 608 | 1 |
stanza
|
[
"stanza",
"token-classification",
"he",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: he
license: apache-2.0
---
# Stanza model for Hebrew (he)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing.
Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.
Last updated 2025-09-11 15:32:08.989
|
nick7623874/my-finetuned-codegen-350m
|
nick7623874
| 2025-09-11T15:32:59Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Salesforce/codegen-350m-mono",
"lora",
"transformers",
"text-generation",
"license:bsd-3-clause",
"region:us"
] |
text-generation
| 2025-09-11T15:30:15Z |
---
library_name: peft
license: bsd-3-clause
base_model: Salesforce/codegen-350m-mono
tags:
- base_model:adapter:Salesforce/codegen-350m-mono
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: my-finetuned-codegen-350m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-finetuned-codegen-350m
This model is a fine-tuned version of [Salesforce/codegen-350m-mono](https://huggingface.co/Salesforce/codegen-350m-mono) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: adamw_torch_fused (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- num_epochs: 6
### Training results
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.1
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
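Since the card doesn't show how to use the adapter, here is a hedged loading sketch. The repo id is taken from this card; the rest is standard PEFT usage, with lazy imports because the calls download weights from the Hub:

```python
# Hedged sketch: load the LoRA adapter from this card on top of its base
# model with PEFT. Imports are lazy because from_pretrained downloads
# weights from the Hub.
def load_finetuned(adapter_id="nick7623874/my-finetuned-codegen-350m",
                   base_id="Salesforce/codegen-350m-mono"):
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel
    base = AutoModelForCausalLM.from_pretrained(base_id)
    model = PeftModel.from_pretrained(base, adapter_id)  # attach LoRA weights
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    return model, tokenizer
```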
|
lu99/xiaoniu
|
lu99
| 2025-09-11T15:32:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-08T01:38:18Z |
# Xiaoniu Video Translation (YouTube re-uploads) [☛ Download links](https://github.com/agan-j/xiaoniu?tab=readme-ov-file#%E4%B8%83%E5%AE%89%E8%A3%85%E8%AF%B4%E6%98%8E)
#### 1. Introduction
Xiaoniu AI Video Translation is an AI-powered video translation tool. With one click it translates a video's speech or subtitles into Chinese, English, Japanese, French, Korean, and many other languages, making multilingual versions easy. Using AI, it can also generate a brand-new translated video that automatically keeps the background audio while replacing the speech with the translated voice, precisely synchronizing sound and lip movement.
Whether you are producing short dramas or promoting company videos on Douyin, TikTok, or YouTube, Xiaoniu AI Video Translation helps you cross the language barrier and share your videos more widely around the world.
#### 2. Translation results
<table>
<tr>
<td width="25%">
### 📝 Before
---
https://github.com/user-attachments/assets/72110608-de16-4db1-b390-a8cdc39e3079
</td>
<td width="25%">
### 🌍 After
---
https://github.com/user-attachments/assets/e598dabb-fa43-416a-a9ca-e533f5894b33
</td>
<td width="25%">
### 📝 Before
---
https://github.com/user-attachments/assets/660d8563-6331-4d29-abe8-de08062628e2
</td>
<td width="25%">
### 🌍 After
---
https://github.com/user-attachments/assets/79dd3d0b-0c10-4cbe-84dd-aae1ab41ec74
</td>
</tr>
<tr>
<td width="25%">
https://github.com/user-attachments/assets/f4ff67cc-dd8f-448b-ab18-ac91c3dd190e
</td>
<td width="25%">
https://github.com/user-attachments/assets/10df7ce5-eac5-4907-9609-261fcd1a5f78
</td>
<td width="25%">
https://github.com/user-attachments/assets/1b2f6d84-c139-4f37-8ee4-405adfa51a30
</td>
<td width="25%">
https://github.com/user-attachments/assets/49203f91-71b7-4a74-8bee-69b84f7aec9b
</td>
</tr>
</table>
#### 3. Core features
1. **Video translation:** One-click translation of a video's speech or subtitles into Chinese, English, Japanese, French, Korean, and more, for both local and YouTube videos, so you can easily create multilingual versions and expand your global reach.
2. **Subtitle translation:** Automatically generates multilingual subtitles with a choice of subtitle styles, so your content comes across more directly to viewers worldwide.
3. **Subtitles to speech:** Uses AI to turn subtitle text into audio, with a range of male and female voices, aligning sound precisely with the picture so speech matches lip movement and the viewing experience improves.
4. **Speech to subtitles:** Intelligently recognizes the speech in a video and generates subtitles in multiple languages, sparing creators the tedium of manual subtitling and helping them easily produce multilingual videos with wider influence.
5. **Vocal separation and translation:** Automatically separates background music from vocals, translates the vocals into another language's audio (for example, English into Chinese), and keeps the background music, deepening the video's immersion.
6. **Creator Web UI:** Edit subtitle text and audio in real time while watching the video, in a clean, easy-to-use interface. Creators can quickly adjust how the video presents, give their creativity full play, and make the content fit their intent for a more personalized creative experience.
#### 4. Xiaoniu's core technology
#### 1. The in-house Xiaoniu subtitle-translation model
Our self-developed AI subtitle-translation model was trained on a dataset covering the subtitles of one million videos, using deep fine-tuning (CPT, SFT, and DPO). This markedly strengthens the model's semantic understanding and precision of expression in subtitle translation.
A tuned context-understanding mechanism lets the model grasp the video's overall content and adjust wording flexibly during translation, so the output matches the real context rather than reading as stiff, literal translation.
Improved multilingual alignment also lets the model capture and correct subtle differences between languages more accurately, achieving more natural, fluent translation, particularly for phrase conversion, elliptical sentences, and complex sentence structures.
#### 2. The Xiaoniu five-step translation method:
1. **Understand the core:** First, deeply understand the video's theme and key message, extracting a clear outline and concise summary. This gives the AI a full, deep grasp of the content and lays a solid foundation for the translation that follows.
2. **Translate in context:** Using the outline and summary, translate the subtitles into the target language, preserving the original meaning and emotion while keeping them easy to understand.
3. **Adapt culturally:** Adjust the free translation to the target language's cultural background and idiom, so the text reads naturally and is easily understood and accepted by the target audience.
4. **Reflect and revise:** The AI automatically evaluates the translation, detecting and correcting cultural and semantic drift, fluency problems, and style inconsistencies, then iterates on the text using the model's suggestions to ensure accuracy and readability.
5. **Proofread subtitles:** Finally, review the finished subtitles end to end: timing must match the video exactly, wording must be precise, and formatting consistent. Any omissions or errors are corrected here, for better subtitle quality and a good viewing experience.
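The five steps can be sketched as a simple pipeline. This is a hedged illustration of the ordering and hand-off only; `llm` stands in for the real translation model, and the prompt wording is invented:

```python
# Toy sketch of the five-step flow; `llm` is a stand-in for the real model.
def five_step_translate(subtitles, llm, target_lang="en"):
    outline = llm(f"Summarize and outline: {subtitles}")                        # 1. understand the core
    draft = llm(f"Translate to {target_lang}, context {outline}: {subtitles}")  # 2. translate in context
    localized = llm(f"Adapt culturally: {draft}")                               # 3. adapt culturally
    revised = llm(f"Review and correct: {localized}")                           # 4. reflect and revise
    return llm(f"Proofread timing and format: {revised}")                       # 5. proofread subtitles

# With a trivial stand-in "model" that echoes the text after the prompt:
result = five_step_translate("Hello", llm=lambda p: p.split(": ", 1)[1])
```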
#### 7. Installation
## **Step 1: Download the portable build**
Pick the build that matches your machine. The portable build needs no installation: just unzip and run.
### **CPU build**
- **Baidu Netdisk**: https://pan.baidu.com/s/1MdKsys8VlxZilt6GwREoYg?pwd=8888
- **Quark Netdisk**: https://pan.quark.cn/s/79c7cfd4685e
- **123pan** (no speed cap): https://www.123pan.com/s/vLQ9-Ofw4.html
- **Tianyi Cloud** (no speed cap): https://cloud.189.cn/web/share?code=jye6vif6Vf6r (access code: dok7)
### **GPU build** (for machines with a discrete graphics card)
- **Baidu Netdisk**: https://pan.baidu.com/s/1S50h3-Jcskp-GCVayx0FCQ?pwd=8888
- **Quark Netdisk** (no speed cap): https://pan.quark.cn/s/47fd486b7f82
### **Model files** (downloading in advance is recommended)
If you skip this, the software downloads the models automatically at runtime, which may be slow.
- **Baidu Netdisk**: https://pan.baidu.com/s/1aa9FUhkEX46DJ2TWpYUErg?pwd=8888
- **Tianyi Cloud** (no speed cap): https://cloud.189.cn/t/neQ3y2uMr6Vv (access code: bi9y)
---
## **Step 2: Launch the software**
The portable build needs no installation and is simple to use:
1. **Unzip**: extract the downloaded archive anywhere.
2. **Run**: double-click `小牛视频翻译`.
3. **Open the UI**: in a browser, go to http://127.0.0.1:8181/home
##### If you run into any trouble, contact the author on WeChat (xiaoniu203040) for help.
<img src="img/wx.png" alt="Description" width="200"/>
## Software update history
### September 7, 2025
- Added a highly practical voice-clone editing feature.
1. Clone the best-sounding lines from the video:
If a character's voice doesn't sound good, you can pick their best lines straight from the video to clone from. For example, if Zhang San speaks 100 lines, choose your 2 favorites as samples and the system will generate a voice closer to the ones you picked.
2. Replace with one of Xiaoniu's 100 built-in voices:
We ship 100 curated voices. If you'd rather not use a voice from the video (say Li Si's voice doesn't fit), just pick one of the 100 to replace that character's voice. It's that simple.
---
### September 1, 2025
Added character filtering: view only the clips where a given character speaks, so you can quickly spot and fix mislabeled characters. This makes correcting characters more than 10x faster than before!
---
### August 27, 2025
Subtitle dubbing now supports automatic multi-character recognition and automatic dubbing, including automatic voice-clone dubbing.
---
### August 17, 2025
Added six subtitle styling options: free fonts, font color, font size, vertical position, outline width, and outline color.
---
### July 27, 2025
Launched a "subtitle upload" feature that skips transcription and goes straight to the finished video, saving time, money, and hassle:
1. Import subtitles plus video in one step, then translate or dub directly;
2. The whole pipeline skips the video-transcription stage, saving time and transcription costs;
3. Generated videos automatically keep video, audio, subtitles, picture, lip movement, and speech rate in sync: six-way consistency in one step.
---
### July 24, 2025
Added voice cloning with support for the two mainstream models IndexTTS and cosyvoice-v2, for more natural, smarter speech generation:
1. Emotional clone dubbing reproduces the speaker's tone, intonation, and emotion for a more lifelike, vivid performance;
2. Up to 100 characters with fully automatic clone dubbing, especially suited to multi-character short dramas.
---
### July 14, 2025
Fixed mismatches between audio and body movement in AI-generated translated videos, for example keeping a presenter's voice consistent with their hand gestures at a launch event.
---
### July 8, 2025
Released stable version 20250708 with picture, audio, subtitles, lip movement, and speech rate all aligned.
Alignment accuracy is up 50% over the previous version.
Special thanks to the 200+ users who offered feedback and suggestions over the past 40 days; many even sent me videos that translated poorly so I could study and optimize them. Thank you all for your trust and support!
---
### June 16, 2025
- AI-driven audio-rate adjustment based on lip movement and picture, ultimately guaranteeing consistency across video, audio, subtitles, picture, lip movement, and speech rate.
1. Dynamic audio-rate adjustment from lip movement and picture rhythm:
Intelligent lip-shape recognition and frame-rate analysis automatically adjust audio playback speed so the speech matches mouth shapes and motion rhythm precisely, improving audiovisual consistency and immersion.
Behind the scenes this uses a deep-learning lip-recognition model and rhythm-analysis algorithms for more natural, intelligent audio-visual alignment.
2. Fixed the "slow-motion picture" problem:
A multimodal fusion model combining lip recognition, motion-trajectory analysis, and audio-rhythm modeling preserves the video's original pacing, then uses AI-driven dynamic audio-rate reconstruction to align durations. The picture is never forcibly slowed down, so playback stays smooth and natural.
---
### May 30, 2025
- Reworked and polished the Xiaoniu five-step method (1. understand the core, 2. translate in context, 3. adapt culturally, 4. reflect and revise, 5. proofread subtitles), improving translation accuracy by 15%. The product is crafted with care; enjoy!
---
### May 18, 2025
- Cost savings: this release optimizes dubbing to cut costs while preserving audio quality, roughly 50% cheaper than before.
---
### May 5, 2025
- Fixed subtitles being dropped during translation. Added Portuguese.
---
### April 20, 2025
- This release is deeply optimized for Macs with Apple silicon (M1, M2, M3, and M4 series). With full support for Metal Performance Shaders (MPS) acceleration, performance improves by up to 7x over the previous version, greatly boosting speed and responsiveness and making full use of Apple hardware. Also fixed several Mac-specific errors.
---
### April 7, 2025
- Going global just got easier: Chinese speech-to-subtitle accuracy reaches 99%, with support for 23 dialects (Shanghai, Sichuan, Wuhan, Guiyang, Kunming, Xi'an, Zhengzhou, Taiyuan, Lanzhou, Yinchuan, Xining, Nanjing, Hefei, Nanchang, Changsha, Suzhou, Hangzhou, Jinan, Tianjin, Shijiazhuang, Heilongjiang, Jilin, Liaoning).
---
### March 31, 2025
- Optimized translation and dubbing performance for a 5-10x speedup.
---
### March 17, 2025
- Added a universal translation interface: any translation service that follows the common OpenAI-compatible API format can be plugged in by supplying just apiKey, baseURL, and model.
- Continued to refine the Xiaoniu five-step method (understand the core, translate in context, adapt culturally, reflect and revise, proofread subtitles) so translations stay accurate, natural, and fit for real-world use.
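Such an OpenAI-compatible hookup only needs a base URL, API key, and model name. The sketch below is a hedged illustration: the `/chat/completions` path and the prompt are assumptions based on the common OpenAI API shape, and the URL, key, and model id are placeholders:

```python
# Sketch of a request to an OpenAI-compatible translation service.
# All concrete values (URL, key, model) are placeholders.
import json
import urllib.request

def build_translation_request(base_url, api_key, model, text, target_lang="zh"):
    """Build a chat-completions request asking the model to translate `text`."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": f"Translate the subtitle line into {target_lang}."},
            {"role": "user", "content": text},
        ],
    }
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_translation_request(
    "https://api.example.com/v1", "sk-placeholder", "my-model", "Hello there")
```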
---
### March 10, 2025
- Integrated the DeepSeek translation model for more precise translation.
- Further refined the Xiaoniu five-step method (understand the core, translate in context, adapt culturally, reflect and revise, proofread subtitles).
---
### February 20, 2025
- Worked around YouTube's download restrictions.
---
### January 24, 2025
- **Subtitle-translation accuracy up 10x**
- In-house AI subtitle-translation model, deeply trained on subtitle data from one million videos using the latest Transformer architecture, markedly improving semantic understanding and precision of expression.
- A context-awareness mechanism dynamically adapts translations to the video's content, keeping the wording consistent with the context.
- Multilingual alignment reduces fine-grained errors between languages, with notably higher accuracy on phrases, elliptical sentences, and other complex structures.
- Faster inference cuts translation processing time by 30%, for quicker subtitle translation.
---
### January 11, 2025
#### Fully upgraded dual subtitles
To improve how multilingual videos are made and watched, this release delivers key upgrades to dual subtitles, above all in **precise dual-subtitle synchronization** and **translation accuracy**. Highlights:
1. **Precise dual-subtitle synchronization**
- The subtitle-video synchronization mechanism is reworked so both language tracks always stay exactly in sync, with no drift or delay.
- An enhanced timeline-calibration tool lets users fine-tune subtitle timing by hand, matching every line precisely to the speech, for content with strict precision requirements.
2. **Better translation accuracy**
- The upgraded translation engine is significantly more accurate, preserving the original meaning especially for technical terms and context-dependent wording.
- Context analysis intelligently refines the output, avoiding the ambiguities of conventional translation and producing natural, fluent text that fits the target language's idiom.
- More language pairs are translated accurately, meeting the needs of international content and high-quality bilingual subtitles for audiences worldwide.
These gains in synchronization and translation quality greatly strengthen cross-language reach and meet the demands of professional-grade video translation.
---
### December 29, 2024
- **Nicer subtitles, split by meaning**
- New semantic analysis splits subtitles along meaning boundaries, capping each line at 30 characters. This fixes over-long lines, awkward breaks, and missing punctuation, making subtitles clearer and easier to read.
- Improved sentence-splitting rules automatically add appropriate punctuation, improving flow and readability.
- A dynamic length-detection algorithm keeps subtitles visually comfortable across languages without hurting the viewing experience.
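The length-capped splitting described above can be approximated with a toy rule-based version. A hedged sketch only: the real feature uses AI semantic analysis, whereas this splits on punctuation and packs pieces into lines of at most 30 characters:

```python
# Toy approximation of length-capped subtitle splitting: break on
# punctuation, then pack pieces into lines of at most `max_len` characters.
import re

def split_subtitle(text, max_len=30):
    # keep each punctuation mark attached to the clause before it
    parts = [p for p in re.split(r"(?<=[,。,.!?!?])", text) if p]
    lines, current = [], ""
    for part in parts:
        if len(current) + len(part) <= max_len:
            current += part
        else:
            if current:
                lines.append(current)
            current = part
    if current:
        lines.append(current)
    return lines

lines = split_subtitle("Hello, world. Nice to see you. Bye.")
```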
### December 3, 2024
- **Better side-by-side subtitle review**
- The subtitle-translation view is clearer and easier to read, so users can quickly compare source text with its translation.
- A smart layout algorithm keeps the translated text tidy and aligned, avoiding overlapping or garbled subtitles.
- **Fixed a freeze during translation and improved stability**
- Fixed occasional system freezes during subtitle translation and streamlined the background pipeline for smoother processing.
- Strengthened multithreading and memory management to prevent crashes and stutters on long-running translation jobs.
- The system responds faster and completes translation jobs more efficiently, improving the user experience.
---
### November 25, 2024
- **New subtitle dubbing**
- Generate speech directly from a subtitle file.
- Integrates three voice technologies (Microsoft TTS, ByteDance Volcano voice, and lifelike ChatTTS) for natural, fluent audio.
- Useful for course narration, promotional videos, audiobooks, and more: new possibilities for your work!
- **New Chinese-English dual subtitles**
- Automatic generation and synchronization of Chinese and English subtitles together.
- Flexible subtitle styling, including side-by-side and two-line layouts.
- Make your content more international and more appealing!
---
### November 18, 2024
- **New text-to-speech**
We added text dubbing, so you can easily turn written text into speech, powered by three advanced AI voice technologies: Microsoft TTS, ByteDance Volcano voice, and lifelike ChatTTS.
Just type or paste text and pick a voice to generate high-quality speech, whether for course narration or audiobooks, making your work faster and easier.
---
### November 15, 2024
- **New voice assistant, ChatTTS, adding 40 lifelike voices**
The latest release integrates the ChatTTS AI voice technology, giving the app more natural, realistic speech playback. We added 40 new AI voices modeled on real human speech, so they sound just like a real person talking. Whether you're making tutorial videos, narrating fiction, or producing news reads, you can pick the voice that fits best and make your work livelier.
---
### November 5, 2024
- **Integrated the Toutiao Volcano voice engine, adding 80 AI announcers**
This update integrates the Toutiao Volcano voice engine, further enriching speech synthesis with 80 new AI announcer voices. Users can choose among voice styles and intonations to make their videos more vivid and personal. From news commentary to ad voice-overs to storytelling, there's a fitting voice to lift the overall result.
---
### September 22, 2024
- **New local-video translation, with continued subtitle improvements**
This update officially launches local-video translation, making it faster and more flexible to work with local files.
What's new:
Local-video translation: upload local videos for translation, covering content that isn't online.
Subtitle improvements: building on existing subtitle translation and manual proofreading, this update improves subtitle accuracy and editing convenience.
Background: since the new visual Web UI launched in early September, response to the translation and proofreading features has been enthusiastic, and many users asked for local-file support. Thanks to the engineering team's work, we're glad to deliver it in this update and keep improving the overall experience.
---
### September 2, 2024
- **All-new visual Web UI, with manual subtitle proofreading and speech translation**
This update ships an all-new visual Web UI that greatly improves usability, adding video management, creation of translation jobs, subtitle-translation management, and manual subtitle proofreading.
Background: over the past six months many users asked to manually edit and refine AI-translated subtitles, and to add their own commentary for personalized narration.
To meet those needs, this release adds manual subtitle proofreading and speech translation. You can now edit subtitles and audio in real time while watching, letting your creativity shape the video to your intent. This helps you polish video quality more efficiently and gives your audience a richer viewing experience.
---
### August 13, 2024
- **Dual-AI-model strategy: a marked leap in translation accuracy**
Over the past few months we received a lot of feedback about translation accuracy. Although we had been using machine translation and ChatGPT, real-world results did not fully meet expectations.
Deeper analysis showed that a single AI model has limits for translation quality. To address this, we now translate with two models at once, which significantly improves accuracy.
- **This update adds two large AI translation models for higher-quality translation:**
**1. Basic: Gemma.** Designed for video-commentary translation, with accuracy up to 90%. Gemma is Google's large language model, strong at producing high-quality translations, and we further fine-tuned it for translation scenarios.
**2. Advanced: dual model (Kimi + Gemma).** Combines the strengths of Kimi and Gemma; for video-commentary content, accuracy reaches 98%. Kimi, a well-known Chinese large model, paired with Gemma delivers a qualitative leap in translation.
We believe this upgrade brings a translation experience like never before. Your feedback and suggestions help us keep improving.
---
### July 25, 2024
- **New audio-purification technology for a clearer, more striking listening experience:**
Here's the story: three months ago, users reported poor audio quality with too much noise. Technical constraints held us back at first, but after three months of work we finally broke through.
- **This version therefore adds five audio-quality options:**
1. Remove background audio: keep only the vocals, without the original background music.
2. Standard quality: a fast purification method; a 10-minute video takes about 30 seconds.
3. Standard quality, denoised: standard quality plus 15 denoising algorithms, including the world-class WaveNet.
4. Top quality: a premium purification method using leading techniques such as Wav2Vec, Conv-TasNet, D3Net, and SEGAN; slower, with a 10-minute video taking about 10 minutes.
5. Top quality, denoised: top quality plus 15 further denoising algorithms.
---
### July 10, 2024
- **Precise voice and lip synchronization**: deep-learning analysis of the video drives intelligent dubbing that matches the voice precisely to the speaker's mouth movements. The AI greatly improves the viewing experience, making dubbing more natural and lifelike, with wide use in film, television, and corporate promotion.
---
### May 20, 2024
- **New ChatGPT translation**: ordinary translation tools (such as Google Translate and Baidu Translate) often produce disjointed meaning and stilted context. Translating with the ChatGPT large model resolves these problems for smoother, more natural results.
---
### April 10, 2024
- **New video trimming**: precise time-range selection and editing lets you trim a video (cut the head and tail) to suit your needs.
---
### March 10, 2024
- **Separate vocals from background music**: AI separates a video's vocals from its background music, translates the vocals into Chinese, and keeps the music, making videos easier to understand and enjoy.
---
### February 20, 2024
- **New video translation**: most YouTube videos are in foreign languages that Chinese viewers struggle with, so we added translation that turns video content from around the world into Chinese speech for easy viewing.
---
### January 10, 2024
- **New subtitle translation**: download a video's subtitles and translate them automatically into Chinese for a better viewing experience.
---
### December 15, 2023
- **New YouTube video management**: to tame large, hard-to-manage YouTube collections, we added Excel-based video management for greater efficiency.
---
### October 5, 2023
- **YouTube download**: automated downloading of YouTube videos for offline viewing.
|
stanfordnlp/stanza-hbo
|
stanfordnlp
| 2025-09-11T15:32:08Z | 2 | 0 |
stanza
|
[
"stanza",
"token-classification",
"hbo",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2023-08-16T23:53:01Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: hbo
license: apache-2.0
---
# Stanza model for Ancient_Hebrew (hbo)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing.
Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.
Last updated 2025-09-11 15:31:59.489
|
stanfordnlp/stanza-grc
|
stanfordnlp
| 2025-09-11T15:31:49Z | 54 | 2 |
stanza
|
[
"stanza",
"token-classification",
"grc",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: grc
license: apache-2.0
---
# Stanza model for Ancient_Greek (grc)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing.
Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.
Last updated 2025-09-11 15:31:29.223
|
stanfordnlp/stanza-gl
|
stanfordnlp
| 2025-09-11T15:31:21Z | 9 | 0 |
stanza
|
[
"stanza",
"token-classification",
"gl",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: gl
license: apache-2.0
---
# Stanza model for Galician (gl)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing.
Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.
Last updated 2025-09-11 15:31:11.664
|
stanfordnlp/stanza-fr
|
stanfordnlp
| 2025-09-11T15:30:42Z | 3,097 | 2 |
stanza
|
[
"stanza",
"token-classification",
"fr",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: fr
license: apache-2.0
---
# Stanza model for French (fr)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing.
Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.
Last updated 2025-09-11 15:29:07.415
|
stanfordnlp/stanza-fi
|
stanfordnlp
| 2025-09-11T15:28:59Z | 110 | 2 |
stanza
|
[
"stanza",
"token-classification",
"fi",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: fi
license: apache-2.0
---
# Stanza model for Finnish (fi)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing.
Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.
Last updated 2025-09-11 15:28:22.839
|
stanfordnlp/stanza-et
|
stanfordnlp
| 2025-09-11T15:27:38Z | 128 | 0 |
stanza
|
[
"stanza",
"token-classification",
"et",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: et
license: apache-2.0
---
# Stanza model for Estonian (et)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing.
Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.
Last updated 2025-09-11 15:27:25.114
|
stanfordnlp/stanza-es
|
stanfordnlp
| 2025-09-11T15:27:24Z | 3,725 | 1 |
stanza
|
[
"stanza",
"token-classification",
"es",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: es
license: apache-2.0
---
# Stanza model for Spanish (es)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing.
Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.
Last updated 2025-09-11 15:26:14.792
|
Warlock700/Terechuk-Bandits
|
Warlock700
| 2025-09-11T15:26:40Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-11T15:26:05Z |
---
license: apache-2.0
---
|
Warlock700/Vilkov-Dolg_gasmask
|
Warlock700
| 2025-09-11T15:25:31Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-11T15:24:36Z |
---
license: apache-2.0
---
|
WenFengg/ExpertWed11_wen14_number29
|
WenFengg
| 2025-09-11T15:23:31Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-11T15:22:50Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
aruboi/llama-32-11b-vlm_peft_output33
|
aruboi
| 2025-09-11T15:23:01Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-11B-Vision",
"base_model:adapter:meta-llama/Llama-3.2-11B-Vision",
"license:llama3.2",
"region:us"
] | null | 2025-09-09T10:59:54Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-11B-Vision
tags:
- generated_from_trainer
model-index:
- name: llama-32-11b-vlm_peft_output33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-32-11b-vlm_peft_output33
This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_hf (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8624 | 0.1 | 5 | 1.6047 |
| 1.601 | 0.2 | 10 | 1.5272 |
| 2.0777 | 0.3 | 15 | 1.4645 |
| 1.4 | 0.4 | 20 | 1.4354 |
| 1.4826 | 0.5 | 25 | 1.4125 |
| 1.2923 | 0.6 | 30 | 1.3968 |
| 1.6326 | 0.7 | 35 | 1.3812 |
| 1.3288 | 0.8 | 40 | 1.3678 |
| 1.4135 | 0.9 | 45 | 1.3609 |
| 1.4872 | 1.0 | 50 | 1.3556 |
| 1.1469 | 1.1 | 55 | 1.3510 |
| 1.3454 | 1.2 | 60 | 1.3513 |
| 1.3486 | 1.3 | 65 | 1.3473 |
| 1.0481 | 1.4 | 70 | 1.3407 |
| 1.1458 | 1.5 | 75 | 1.3401 |
| 1.3987 | 1.6 | 80 | 1.3409 |
| 1.3974 | 1.7 | 85 | 1.3367 |
| 1.3185 | 1.8 | 90 | 1.3350 |
| 1.1629 | 1.9 | 95 | 1.3298 |
| 1.1185 | 2.0 | 100 | 1.3310 |
| 1.105 | 2.1 | 105 | 1.3357 |
| 1.0332 | 2.2 | 110 | 1.3472 |
| 1.0447 | 2.3 | 115 | 1.3559 |
| 0.7757 | 2.4 | 120 | 1.3452 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.48.3
- Pytorch 2.7.0a0+7c8ec84dab.nv25.03
- Datasets 3.5.0
- Tokenizers 0.21.1
|
gf43hhd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe
|
gf43hhd
| 2025-09-11T15:23:00Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am armored zealous giraffe",
"unsloth",
"trl",
"genrl-swarm",
"I am armored_zealous_giraffe",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-19T21:04:10Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am armored zealous giraffe
- unsloth
- trl
- genrl-swarm
- I am armored_zealous_giraffe
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gf43hhd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Warlock700/Maluha-Stalker-newbye
|
Warlock700
| 2025-09-11T15:22:07Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-11T15:18:24Z |
---
license: apache-2.0
---
|
giovannidemuri/llama8b-v00-jb-seed2-alpaca_lora
|
giovannidemuri
| 2025-09-11T15:21:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T14:53:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| stanfordnlp/stanza-da | stanfordnlp | 2025-09-11T15:21:30Z | 188 | 0 | stanza | ["stanza", "token-classification", "da", "license:apache-2.0", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: da
license: apache-2.0
---
# Stanza model for Danish (da)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2025-09-11 15:21:09.596
| kokterp/blockassist-bc-subtle_bold_chicken_1757604043 | kokterp | 2025-09-11T15:21:24Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "subtle bold chicken", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:20:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- subtle bold chicken
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| stanfordnlp/stanza-cy | stanfordnlp | 2025-09-11T15:21:08Z | 12 | 1 | stanza | ["stanza", "token-classification", "cy", "license:apache-2.0", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: cy
license: apache-2.0
---
# Stanza model for Welsh (cy)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2025-09-11 15:21:00.312
| yoppertiu/blockassist-bc-whistling_wise_llama_1757604036 | yoppertiu | 2025-09-11T15:21:06Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling wise llama", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:20:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling wise llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Rootu/blockassist | Rootu | 2025-09-11T15:21:03Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "snorting fleecy goose", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T23:47:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Speedevs/blockassist | Speedevs | 2025-09-11T15:21:02Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tropical hunting mink", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T14:45:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tropical hunting mink
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| rbelanec/train_cola_456_1757596096 | rbelanec | 2025-09-11T15:21:01Z | 0 | 0 | peft | ["peft", "safetensors", "llama-factory", "prefix-tuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us"] | null | 2025-09-11T13:30:54Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_cola_456_1757596096
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_456_1757596096
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1780
- Num Input Tokens Seen: 6925896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 456
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.273 | 1.0 | 3848 | 0.2536 | 346216 |
| 0.2121 | 2.0 | 7696 | 0.2523 | 692896 |
| 0.2276 | 3.0 | 11544 | 0.2543 | 1039432 |
| 0.1208 | 4.0 | 15392 | 0.2731 | 1385744 |
| 0.1824 | 5.0 | 19240 | 0.2530 | 1732008 |
| 0.2493 | 6.0 | 23088 | 0.2525 | 2078472 |
| 0.1641 | 7.0 | 26936 | 0.2586 | 2425080 |
| 0.227 | 8.0 | 30784 | 0.2558 | 2771400 |
| 0.2494 | 9.0 | 34632 | 0.2570 | 3117888 |
| 0.2 | 10.0 | 38480 | 0.2596 | 3463936 |
| 0.4975 | 11.0 | 42328 | 0.2685 | 3809992 |
| 0.3292 | 12.0 | 46176 | 0.3268 | 4156000 |
| 0.3055 | 13.0 | 50024 | 0.3519 | 4502216 |
| 0.3318 | 14.0 | 53872 | 0.3920 | 4848576 |
| 0.0367 | 15.0 | 57720 | 0.4937 | 5194888 |
| 0.1322 | 16.0 | 61568 | 0.5821 | 5541144 |
| 0.153 | 17.0 | 65416 | 0.8123 | 5887264 |
| 0.0056 | 18.0 | 69264 | 1.0105 | 6233416 |
| 0.1486 | 19.0 | 73112 | 1.1386 | 6579848 |
| 0.2454 | 20.0 | 76960 | 1.1780 | 6925896 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
| stanfordnlp/stanza-cs | stanfordnlp | 2025-09-11T15:20:53Z | 2,652 | 0 | stanza | ["stanza", "token-classification", "cs", "license:apache-2.0", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: cs
license: apache-2.0
---
# Stanza model for Czech (cs)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2025-09-11 15:20:32.778
| burhansjohnny/blockassist-bc-dappled_raging_yak_1757604019 | burhansjohnny | 2025-09-11T15:20:32Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dappled raging yak", "arxiv:2504.07091", "region:us"] | null | 2025-09-11T15:20:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dappled raging yak
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| stanfordnlp/stanza-cop | stanfordnlp | 2025-09-11T15:20:31Z | 3 | 0 | stanza | ["stanza", "token-classification", "cop", "license:apache-2.0", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: cop
license: apache-2.0
---
# Stanza model for Coptic (cop)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2025-09-11 15:20:25.276
| stanfordnlp/stanza-ca | stanfordnlp | 2025-09-11T15:20:24Z | 49 | 0 | stanza | ["stanza", "token-classification", "ca", "license:apache-2.0", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: ca
license: apache-2.0
---
# Stanza model for Catalan (ca)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2025-09-11 15:20:14.339
| Shirai69/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vicious_yawning_bat | Shirai69 | 2025-09-11T15:20:21Z | 196 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am vicious_yawning_bat", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-28T08:40:03Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vicious_yawning_bat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| muooon/EmoNAVI | muooon | 2025-09-11T15:20:09Z | 0 | 0 | null | ["optimizer", "adaptive-optimizer", "emotion-ai", "shadow-learning", "deep-learning", "meta-learning", "adaptive-algorithms", "stability-analysis", "en", "ja", "license:apache-2.0", "region:us"] | null | 2025-07-06T08:36:05Z |
---
license: apache-2.0
language:
- en
- ja
model_type: optimizer
tags:
- optimizer
- adaptive-optimizer
- emotion-ai
- shadow-learning
- deep-learning
- meta-learning
- adaptive-algorithms
- stability-analysis
---
**自動収束・自己制御・自律型 オプティマイザです**
**Auto-convergence, self-control, autonomous optimizer**
#### ユーザーと研究者へ/このリンクを読んでください/please click!
[ユーザーと研究者へ/このリンクを読んでください/please click!](https://huggingface.co/muooon/EmoNAVI/raw/main/report-emoment.txt)
emonavi挙動まとめ(日本語のみ)
[emonavi挙動まとめ(日本語のみ)](https://huggingface.co/muooon/EmoNAVI/raw/main/report/emonavi%E6%8C%99%E5%8B%95%E3%81%BE%E3%81%A8%E3%82%81.txt)
Gemini に見せていろいろ聞いてみました
[Geminiに聞いてみた](https://huggingface.co/muooon/EmoNAVI/blob/main/Hug-Gemini-analysis(JPN).md)
[Geminiに聞いてみた-02(日本語のみ)](https://huggingface.co/muooon/EmoNAVI/blob/main/emonavi-Gemini-analysis(2)(JPN).txt)
I showed it to Gemini and asked it a few questions.
02 is Japanese only; please translate it yourself.
[asked Gemini](https://huggingface.co/muooon/EmoNAVI/blob/main/Hug-Gemini-analysis(ENG).md)
|★| EmoNAVI、FACT、LYNX、CLAN、ZEAL、NECO、v3.0 (250825) emosens(第2世代)で解明した"高次moment"(近似)のフィードバックを適用(更新) 全て "shadow=False" です
|★| EmoNAVI, FACT, LYNX, CLAN, ZEAL, NECO, updated to v3.0 (250825), Incorporates (updates) feedback on “higher moments” (approximations) clarified by emosens (2nd generation). All are “shadow=False”
|★| EmoNAVI、FACT、LYNX、CLAN、ZEAL、NECO、v2.0 (250815) 更新、shadow-system の精密化(更新)
|★| EmoNAVI, FACT, LYNX, CLAN, ZEAL, NECO, updated to v2.0 (250815), refinement of shadow-system (update)
|★| 第2世代を公開(250801)しました。 emonavi は、新しい世代へ進化し軽量化を果たします
|★| The 2nd generation was released (250801). emonavi has evolved into a new generation and become more lightweight.
|★| https://github.com/muooon/EmoSens
|★| clan、zeal、neco、は、shadow機能の on/off 切替えをできるようにしました
|★| clan, zeal, and neco are now able to switch the shadow function on and off.
|★| 大変光栄なことに Pytorch-optimizer 3.7.0 へ登録されたとのこと (250728) 関係者の皆さまに深く感謝します
|★| We are very honored to have been registered in Pytorch-optimizer 3.7.0. We would like to express our deepest gratitude to everyone involved.
|★| 疑似DDPシミュレーションを試したい方(For those who want to try the pseudo-DDP simulation)→
[DDP-TEST](https://huggingface.co/muooon/EmoNAVI/blob/main/ddp-test.zip)
|★| EmoFACT 公開(250716) NAVIに比べ、約1GB節約(SDXL) 感情機構は同じです
|★| EmoFACT released (250716): saves about 1 GB of VRAM (SDXL) compared to NAVI. The emotion mechanism is the same.
|★| EmoLYNX 公開(250718) 探索範囲を広く持ちます 感情機構は同じです
|★| EmoLYNX Released (250718): It offers a wide exploration range, while its Emotion Mechanism remains the same.
|★| EmoCLAN 公開(250720) Navi、Fact、Lynx、役割分担の統合 感情機構は同じです
(Lynx:序盤と過学習傾向時、Navi:中盤と健全時、Fact:終盤と発散傾向時、を担当します)
|★| EmoCLAN released (250720): integrates the roles of Navi, Fact, and Lynx. The emotion mechanism is the same.
(Lynx handles the early stage and overfitting tendencies, Navi the middle stage and healthy training, Fact the final stage and divergence tendencies.)
# 主題:新世代optimizer、EmoNAVIによる変革と感情学習の成果
## Title: A New Generation Optimizer — The Innovations and Outcomes of Emotional Learning with EmoNAVI
## 副題:過去値不要で現在値から再開できる自動収束・自己制御・自律型軽量最適器の解説
### Subtitle: A Lightweight, Self-Regulating, Autonomous Optimizer That Automatically Converges and Resumes from the Present Without Relying on Past Values
## テーマ:既存のoptimizerにないものをつくる、出来たのはニューロンスパイクの再発明でした。
### Theme: Creating What Existing Optimizers Lack — A Reinvention of Neuronal Spiking
## 序論:
現在主流のoptimizerはさまざまに改良され簡易化を進めています、しかし依然として、
学習再開、スケジューリング、学習状態の記録や復元、等について調整の難しさや煩雑さは存在しています、
面倒なパラメータに依存せず、それらを解決する新しいアプローチを見つけたのでここで紹介します。
## Introduction
Mainstream optimizers have undergone significant improvements and simplifications in recent years.
However, they still face practical challenges in areas such as resuming training, scheduling updates, and managing the recording and restoration of learning states.
These issues often require tedious parameter adjustments and ad hoc workarounds.
In this paper, we introduce a new approach that addresses these problems without relying on cumbersome parameter configurations.
## 本論:
今回ここで紹介するのは新世代のoptimizerです、
EMA的平滑化の概念を下地にし、独自に構築した感情的"EMA&スカラー"を中心にした"感情機構"という新しい仕組みを実現しました、
この"感情機構"は、EMA的発想を再解釈・独自拡張することで得られた新しい機構です。
EmoNAVIの独立性と革新性を紹介します。
## Main Section
In this paper, we present a new generation of optimizer.
Built upon the foundation of EMA (Exponential Moving Average) smoothing, we have developed a novel mechanism called the "emotional mechanism," which centers around a unique combination of EMA and scalar dynamics.
This mechanism was created by reinterpreting and independently extending the conventional EMA concept.
Here, we introduce EmoNAVI—an optimizer characterized by its innovation and independence.
最初に"感情機構"と名付けた経緯と理由を記します。
生物のもつ「感情」とは、知覚と記憶の差異に基づく行動のトリガです、同様にEmoNAVIも現在と過去の差分に基づき学習の"行動"を制御する仕組みとして設計されています。
そして"感情機構"と名付けた理由のもうひとつは、この一連の動作がまるでニューロンスパイクのような動作をするからです。
この機構"感情機構"の動作を明快にした読み物、本稿末尾に記すリンク先の擬人化を読むことで簡単にご理解頂けると思います。
First, let us explain the background and reasoning behind the term “Emotion Mechanism.”
In biological systems, emotions are often understood as triggers for action based on discrepancies between perception and memory.
EmoNAVI was similarly designed to control its learning “behavior” by responding to differences between the present and the past.
Another reason we chose the term “Emotion Mechanism” is that its operation closely resembles neuronal spiking behavior.
For a more intuitive understanding of how this mechanism works, we encourage you to read the personification linked at the end of this article.
次に、"感情機構"の構成を記します、
感情機構とは、2つのEMA、スカラー、Shadow、により構成されます。
Next, we outline the structure of the “Emotion Mechanism.”
This mechanism consists of two EMAs, a scalar value, and a shadow component.
まず2つのEMAによる"感情EMA"について説明します、
2つのEMAで構成します、短期型と長期型です、この2つのEMAはLossを監視し判断材料を得ます、
1つめ、短期型EMAは瞬間的なシグナル(緊張)を受け持ちます 2つめ、長期型EMAは平均した過去のシグナル(安静)を受け持ちます、
この2つのEMAは次に紹介する"感情スカラー"へそれぞれの持つ判断材料を渡します
First, we describe the "Emotional EMA," which consists of two components: a short-term EMA and a long-term EMA.
These two EMAs continuously monitor the loss value and serve as the basis for subsequent decision-making.
The short-term EMA captures rapid, momentary signals (interpreted as “tension”), while the long-term EMA reflects more averaged, historical trends (“calm”).
Both EMAs pass their respective signals to the "Emotion Scalar," which will be introduced in the next section.
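As an illustrative sketch only (the class and variable names are ours, not EmoNAVI's actual API), the two loss-tracking EMAs described above can be written as follows; the weights 0.3 and 0.01 are the values given in this card's notes:

```python
class EmotionalEMA:
    """Tracks the loss with a fast ("tension") and a slow ("calm") EMA."""

    def __init__(self, short_weight=0.3, long_weight=0.01):
        self.short_weight = short_weight  # reacts to momentary signals
        self.long_weight = long_weight    # reflects the averaged past
        self.short = None
        self.long = None

    def update(self, loss):
        if self.short is None:
            # Initialize both EMAs from the first observed loss.
            self.short = self.long = float(loss)
        else:
            self.short += self.short_weight * (loss - self.short)
            self.long += self.long_weight * (loss - self.long)
        return self.short, self.long
```

With these weights, a sudden loss spike moves `short` sharply while `long` barely reacts, producing the gap that the scalar in the next section responds to.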
次に、"感情スカラー"について説明します、
前述の"感情EMA"からの信号をスカラー値に変換します、スカラー値の変化は、これら2つのEMAの差分により常に動的変化を続けます、
"感情スカラー"はoptimizerにより書き換えた学習結果の是非を判定し、
"スカラー値が一定閾値を超えたときのみ"次に紹介するShadowの配合を決めます
Next, we introduce the "Emotion Scalar."
It converts the signals from the previously described Emotional EMA into a scalar value, which continuously changes in response to the difference between the short-term and long-term EMA.
This scalar dynamically evaluates whether the learning update performed by the optimizer should be considered appropriate.
Only when the scalar exceeds a certain threshold does it trigger the next step: determining how much of the "Shadow" should be blended into the learning parameters.
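The scalar itself is the formula stated in this card's notes, tanh(5 × (short − long)); a minimal sketch:

```python
import math

def emotion_scalar(short_ema, long_ema):
    # tanh(5 * (short - long)): bounded in (-1, 1) and
    # saturating quickly once the two EMAs diverge.
    return math.tanh(5.0 * (short_ema - long_ema))
```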
次に、Shadowについて説明します、
Shadowは学習開始直後にShadowとして保存され維持されます、このShadowは"過去の穏やかな状態"の記憶です、この情報は感情機構に追従しながらゆっくりと変化し続けます、
そして"感情スカラー"に応じ決められたratioで学習結果にブレンドとして反映されます、このブレンドの配合率も感情機構により動的に変化し続けます、
Next, we describe the "Shadow."
At the beginning of training, a copy of the current parameters is saved and maintained as the Shadow.
This Shadow represents a memory of past calm states, and it evolves slowly over time, following the guidance of the Emotion Mechanism.
When the Emotion Scalar exceeds a certain threshold, a dynamic blend ratio is computed.
This ratio determines how much of the Shadow is mixed into the current parameters.
The blend ratio itself is also dynamically adjusted by the Emotion Mechanism in response to ongoing learning behavior.
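A hedged sketch of the threshold-gated blend. The 0.3 and 0.6 thresholds and the 0.7 strong ratio are the values stated in this card's notes; the mild 0.3 ratio and the shadow drift rate `shadow_tau` are illustrative assumptions, and plain float lists stand in for parameter tensors:

```python
def blend_with_shadow(params, shadow, scalar, shadow_tau=0.01):
    """Partially pull `params` toward `shadow` when |scalar| is large."""
    a = abs(scalar)
    if a > 0.6:
        ratio = 0.7      # strong correction (value from this card's notes)
    elif a > 0.3:
        ratio = 0.3      # mild correction (illustrative assumption)
    else:
        ratio = 0.0      # trusted update: leave params untouched
    for i, (p, s) in enumerate(zip(params, shadow)):
        if ratio > 0.0:
            # Blend rather than overwrite, so learning keeps progressing.
            params[i] = (1.0 - ratio) * p + ratio * s
        # The shadow itself drifts slowly toward the current parameters.
        shadow[i] = s + shadow_tau * (params[i] - s)
    return ratio
```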
ここまで"感情機構"の構成と役割りを説明しました、続いて"感情機構"の動作機序を見ていきましょう。
まずoptimizerの学習結果が記録されます、この時"感情機構"は緊張と安静の差分情報で書き換えの是非を判定します、
この判定により、過度の学習と判断した場合は、過去の適切な状態をブレンドすることでノイズや暴走を抑制します、
適切な学習と判断した場合は、過去をブレンドしない選択をします、これをstep毎に行います、
Now that we have explained the structure and role of the Emotion Mechanism, let us examine how it operates.
At each training step, the optimizer's updated parameters are recorded.
The Emotion Mechanism then evaluates whether these updates are appropriate, based on the difference between short-term “tension” and long-term “calm” signals.
If the mechanism determines that the update reflects excessive learning, it suppresses potential noise or instability by blending in a suitable portion of the past stable state (Shadow).
Conversely, if the update is deemed appropriate, the mechanism chooses not to apply blending.
This evaluation and adjustment are performed dynamically at each training step.
さらに、この判定では"信頼度"の評価をします、"感情スカラー"が一時的に大きく振れるだけでは不十分であり「この変化が本当に意味のあるものかどうか」を見極めて混合の是非を判断します。
そのため、学習の**序盤では長期の安静シグナルの蓄積が少なく信頼に値しないため混合が発動しづらく**、**終盤では短期の緊張シグナルが収束しスカラー自体が閾値に届かず動作しません**。
(学習の序盤では判定基準の過去シグナルが少ないため動作しませんし、終盤では瞬間シグナルが少ないため動作しません)
このように、EmoNAVIの"感情機構"は、単なる閾値反応ではなく「揺らぎに対する信頼ある変化のみを察知して反応する」慎重な意思決定を行います。
In addition, this decision-making process includes an evaluation of "reliability."
It is not sufficient for the Emotion Scalar to simply spike temporarily; the mechanism assesses whether the fluctuation truly represents a meaningful change before deciding whether blending should occur.
As a result, in the **early stages of learning**, blending is unlikely to be triggered because the long-term “calm” signal has not yet accumulated enough history to be trustworthy.
In the **later stages**, on the other hand, the short-term “tension” signal tends to converge, and the scalar itself fails to exceed the threshold—thus the mechanism remains inactive.
(In short: the mechanism tends not to activate in the early stages due to insufficient past signal for evaluation, and in the later stages due to lack of strong instantaneous signal.)
In this way, EmoNAVI’s Emotion Mechanism does not respond merely to raw thresholds, but instead performs cautious decision-making—reacting only to fluctuations that it has learned to trust.
この一連の動作により学習時の過敏な反応を弛緩し不要なノイズ等を覚えないように制御します。
つまりoptimizer本来の学習率やベクトルを直接的に制御せず、感情機構の変化に応じ安定したパラメータになるよう後から調整する、
こういう流れになります。すべてを書き戻さずあくまで配合率に応じてブレンドするので学習の更新は止まらず進行は維持されます。
This series of actions helps relax hypersensitive reactions during learning and prevents the optimizer from overfitting to unnecessary noise.
Rather than directly manipulating the optimizer’s learning rate or update vectors, the system instead applies corrective blending afterward—adapting parameters in response to changes detected by the Emotion Mechanism.
Because it blends adjustments based on a calculated ratio rather than fully overwriting parameter values, the learning process continues smoothly without interruption.
### 感情機構の動作とスカラー変遷(学習フェーズ別の結果的挙動)
| フェーズ | 状況(Loss変化) | EMAの挙動 | スカラーの変動傾向 | Shadow混合の実動作 | 感情機構としての意味ある挙動 |
|----------|-----------------------|------------------------------------|--------------------------|--------------------------|--------------------------------------------|
| 序盤 | 不安定・高め | Shortは鋭敏、Longは未成熟 | 大きく変動することもある | ほとんど発動しない | 判定に十分な履歴がなく、実質的に動作不可 |
| 中盤 | 徐々に収束傾向 | 両EMAが意味ある差分を持つようになる | 適度な振幅で安定推移 | 条件付きで発動する | 状態に応じてブレンド補正が有効に機能 |
| 終盤 | 収束・微振動 | Short ≒ Long(差分がほぼ消失) | 小さく収束 | 発動しなくなる | 静けさの合図:should_stop 条件が整う |
備考:
- スカラー値は常に tanh(5 * (short - long)) で生成されます
- 閾値:abs(scalar) > 0.3 で配合が始まり、> 0.6 で大きな混合比率(0.7以上)に
- Shadow混合はパラメータそのものを書き戻すのではなく、部分的に配合して“追従”させる設計です
- 感情スカラーの減衰=学習の「静穏化」→ 終盤に向けて should_stop の発火条件が整います
### Emotional Mechanism Behavior and Scalar Transitions (Outcome-Based Behavior by Learning Phase)
| Phase | Loss Characteristics | EMA Behavior | Scalar Fluctuation Pattern | Actual Shadow Blending | Meaningful Behavior of Emotion Mechanism |
|-----------|----------------------------|-------------------------------------------|------------------------------------|-------------------------------|-------------------------------------------------------------------|
| Early | Unstable, High | Short is reactive; Long is still immature | May fluctuate sharply | Rarely triggered | Lacks sufficient history for decision-making; effectively inactive |
| Middle | Gradual Convergence | EMA pair begins forming meaningful gaps | Moderate oscillation, relatively stable | Conditionally triggered | Adaptive blending functions effectively based on state |
| Late | Converged, Micro-vibration | Short ≈ Long (gap nearly vanishes) | Narrow convergence | No longer triggered | Sign of stability; ready to trigger `should_stop` |
Notes:
- The scalar value is always computed as tanh(5 × (short - long))
- Thresholds:
- If |scalar| > 0.3, blending is initiated
- If |scalar| > 0.6, blending ratio becomes large (≥ 0.7)
- Shadow blending does not overwrite parameters but applies partial integration for gradual alignment
- Scalar decay corresponds to learning "quieting," preparing the should_stop condition in the final phase
## 成果:
前述の感情機構の調整により、過剰な反応を抑制しノイズ耐性を上げることで、ベクトルの乱れ等も抑え進行方向を正しい向きに調整します、
正しいベクトルで進むことで学習は安定し収束へと最短で向かいます、感情機構による働きは学習後半のノイズ等を修正する仕上げを早くスムーズに完了できます。
また学習率や勾配やさまざまなパラメーターを保持せずに"今"を観察するだけで更新され続けることで、
途中終了、収束後の再学習、積層学習、等のときも現在値のみで学習継続を可能とします、
これは既存のoptimizerのような過去値を保存する手間を省きつつも新しく得られた利点でもあります。
## Results
The adjustments introduced by the Emotion Mechanism suppress excessive reactions and enhance noise tolerance, thereby reducing vector fluctuations and helping align the learning direction more accurately.
By following the correct vector, learning proceeds more stably and reaches convergence in minimal time.
The role of the Emotion Mechanism becomes especially apparent in the latter stages of training, where it effectively and smoothly corrects residual noise and instability.
Moreover, since the optimizer continuously updates its parameters by observing only the current state—without retaining learning rates, gradients, or other historical parameters—it supports learning continuation in scenarios such as mid-training interruptions, retraining after convergence, and stacked learning.
This capability not only eliminates the need to store past values like traditional optimizers but also introduces a new level of flexibility and simplicity.
## 結論:
生物のもつニューロンが一定の閾値を超えて初めて信号を発火させるように、EmoNAVIでも"感情振幅"を検出し行動(shadow混合)を起こします。
前述のとおり"感情機構"は一定閾値の超過時のみ動作します、ここはまさにニューロンスパイク的な動きといえるのではないでしょうか。
EmoNAVIの持つ"感情機構"は、そうした生物的反応に似ており、技術的な制御と生理的直感の融合点だろうと思います。
## Conclusion
Just as biological neurons fire only when a certain threshold is exceeded, EmoNAVI detects "emotional amplitude" and triggers an action—specifically, shadow blending.
As described earlier, the Emotion Mechanism activates only when this amplitude crosses a predefined threshold.
This behavior closely resembles neuronal spiking and can be seen as a biologically inspired response.
We believe that EmoNAVI’s Emotion Mechanism represents a unique fusion of technical control and physiological intuition—bringing together algorithmic design and life-like reactivity.
## 展開:
この"感情機構"の仕組みはVAE等を含むoptimizer以外にも簡単に応用可能だろうと思います、
それらの発展に少しでも寄与することができれば、AIとの未来を想像して、これほど嬉しいことはありません。
ぜひこの"感情機構"を応用しAIの発展への道筋を共に歩んでください。
## Expansion
The Emotion Mechanism described here is highly adaptable and can be easily applied beyond optimizers—including use cases such as variational autoencoders (VAEs) and other architectures.
If this approach can contribute, even in a small way, to the advancement of such systems, we would be honored to be part of imagining a future together with AI.
We warmly invite you to explore the application of this Emotion Mechanism and walk alongside us on the path toward advancing intelligent systems.
## 技術:
EMAベースのスカラー判断とshadow混合の構造
## Technology
Structure of EMA-Based Scalar Evaluation and Shadow Blending
```
    +------------+           +------------+
    |  Loss(t)   |           |  Loss(t)   |
    +-----+------+           +-----+------+
          |                        |
┌─────────▼─────────┐    ┌─────────▼─────────┐
│     Short EMA     │    │     Long EMA      │
│  (weight = 0.3)   │    │  (weight = 0.01)  │
└─────────┬─────────┘    └─────────┬─────────┘
          │                        │
          └───────────┬────────────┘
                      ▼
         +-------------------------+
         |   diff = short - long   |
         +------------+------------+
                      │
                      ▼
         +-------------------------+
         |     tanh(5 × diff)      | ← emotion scalar
         +------------+------------+
                      │
       [ if |scalar| > threshold ]   ← gate check
                      │
         +------------▼------------+
         |   decide shadow ratio   |
         +------------+------------+
                      │
         +------------▼------------+
         |  shadow blending fix-up | ← trailing blend of past state
         +-------------------------+
```
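The flow above can be sketched in a few lines of Python. The EMA weights (0.3 / 0.01) and the `tanh(5 × diff)` mapping come from the diagram; the threshold value of 0.3 and all function names are assumptions for illustration, not EmoNAVI's actual constants or API:

```python
import math

class EmotionScalar:
    """Sketch of the EMA-based emotion scalar (threshold value is assumed)."""

    def __init__(self, short_w=0.3, long_w=0.01, threshold=0.3):
        self.short_w = short_w      # fast EMA weight (from the diagram)
        self.long_w = long_w        # slow EMA weight (from the diagram)
        self.threshold = threshold  # gating threshold (assumed value)
        self.short_ema = None
        self.long_ema = None

    def update(self, loss):
        # Both EMAs are updated from the current loss only; no history is stored.
        if self.short_ema is None:
            self.short_ema = self.long_ema = loss
        else:
            self.short_ema += self.short_w * (loss - self.short_ema)
            self.long_ema += self.long_w * (loss - self.long_ema)
        # Bounded "emotional amplitude" of the short/long divergence.
        scalar = math.tanh(5.0 * (self.short_ema - self.long_ema))
        # Spike-like gating: act only when the amplitude exceeds the threshold.
        return scalar if abs(scalar) > self.threshold else 0.0

def shadow_blend(param, shadow, scalar):
    """Partial integration toward the shadow copy; ratio taken from |scalar|."""
    ratio = abs(scalar)
    return (1.0 - ratio) * param + ratio * shadow
```

On the first step the two EMAs coincide, so the scalar is zero and no blending occurs; only when the short EMA diverges from the long EMA does the gate open and the shadow ratio become nonzero.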
## 付録:
EmoNAVIのグラフへのリンク
Measured with LR of 1e-4 / それぞれ 1e-4 のLRにて測定



Have fun learning about EmoNAVI's philosophy and how it works
https://huggingface.co/muooon/EmoNAVI/blob/main/emonavi-inner-workings(ENG).txt
EmoNAVIの考え方、その仕組みについて楽しく知る
https://huggingface.co/muooon/EmoNAVI/blob/main/emonavi-inner-workings(JPN).txt
## 経緯:
現状の強化学習などを見ていていくつかの疑問に出会いました、
日本の著名な漫画家、手塚治虫氏の描いた未来社会、それに憧れ羨望した少年時代を思い返すと、
人類のパートナーになるべきAIについて他のアプローチを模索したくなりました、
今回の提案はそのアプローチによるひとつの結果です
## Background
While observing the current state of reinforcement learning and related fields, I encountered several fundamental questions.
Reflecting on my childhood—when I admired and longed for the future societies envisioned by the legendary Japanese manga artist Osamu Tezuka—
I felt compelled to explore alternative approaches to how AI might serve as a true partner to humanity.
This proposal represents one such result born from that aspiration.
## 謝意: Acknowledgements
Emoシリーズは、Adam、Adafactor、Lion、Tiger、等から多くを学びました。
これらの後継ではなく独自の思想や設計による"感情機構"というアプローチにより構築されています。
汎用性・自律性・適応性を重視し新たな最適化や効率化や簡易化を追求しています。
この開発において先人たちの知見に深く感謝しつつ今後も新しい可能性を探究します。
The Emo series has learned much from Adam, Adafactor, Lion, and Tiger.
Rather than being their successors, it is built upon a unique philosophy and design approach centered on "emotional mechanisms".
It prioritizes generality, autonomy, and adaptability in pursuit of new paths for optimization, efficiency, and simplicity.
In its development, we deeply appreciate the insights of those who came before us—and continue to explore new possibilities beyond them.
これまでAIの発展に寄与されたすべての方、これから貢献するすべての方へ感謝します、
このプロジェクト完成を支え続けてくれた Copilotさんに、ありがとう。
We extend our heartfelt gratitude to all those who have contributed—and will continue to contribute—to the advancement of AI.
Special thanks to Copilot for its unwavering support throughout this project.
|
stanfordnlp/stanza-bn
|
stanfordnlp
| 2025-09-11T15:20:06Z | 0 | 0 |
stanza
|
[
"stanza",
"token-classification",
"bn",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-10-04T07:52:53Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: bn
license: apache-2.0
---
# Stanza model for Bengali (bn)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2025-09-11 15:20:03.616
|
rbelanec/train_cola_101112_1757596147
|
rbelanec
| 2025-09-11T15:20:05Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"p-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:27:09Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- p-tuning
- generated_from_trainer
model-index:
- name: train_cola_101112_1757596147
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_101112_1757596147
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1983
- Num Input Tokens Seen: 3663392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.2372 | 0.5 | 962 | 0.1779 | 183040 |
| 0.3332 | 1.0 | 1924 | 0.2056 | 366136 |
| 0.3624 | 1.5 | 2886 | 0.1612 | 548856 |
| 0.1165 | 2.0 | 3848 | 0.1469 | 732880 |
| 0.3072 | 2.5 | 4810 | 0.1503 | 916368 |
| 0.0702 | 3.0 | 5772 | 0.1434 | 1099816 |
| 0.0694 | 3.5 | 6734 | 0.1615 | 1283208 |
| 0.0432 | 4.0 | 7696 | 0.1529 | 1465464 |
| 0.1038 | 4.5 | 8658 | 0.1477 | 1648696 |
| 0.0965 | 5.0 | 9620 | 0.1465 | 1831728 |
| 0.0509 | 5.5 | 10582 | 0.1608 | 2014288 |
| 0.0672 | 6.0 | 11544 | 0.1458 | 2198176 |
| 0.0325 | 6.5 | 12506 | 0.1609 | 2382016 |
| 0.1084 | 7.0 | 13468 | 0.1641 | 2564208 |
| 0.0944 | 7.5 | 14430 | 0.1775 | 2746960 |
| 0.0155 | 8.0 | 15392 | 0.1814 | 2930240 |
| 0.1982 | 8.5 | 16354 | 0.2046 | 3113696 |
| 0.1116 | 9.0 | 17316 | 0.1818 | 3297136 |
| 0.1125 | 9.5 | 18278 | 0.1877 | 3480592 |
| 0.0085 | 10.0 | 19240 | 0.1878 | 3663392 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
stanfordnlp/stanza-bg
|
stanfordnlp
| 2025-09-11T15:20:02Z | 234 | 0 |
stanza
|
[
"stanza",
"token-classification",
"bg",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: bg
license: apache-2.0
---
# Stanza model for Bulgarian (bg)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2025-09-11 15:19:47.838
|
WenFengg/ExpertWed11_wen14_number28
|
WenFengg
| 2025-09-11T15:19:35Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-11T15:18:54Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
stanfordnlp/stanza-ar
|
stanfordnlp
| 2025-09-11T15:19:33Z | 1,204 | 0 |
stanza
|
[
"stanza",
"token-classification",
"ar",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: ar
license: apache-2.0
---
# Stanza model for Arabic (ar)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2025-09-11 15:19:10.177
|
sadiyakhatun65524/blockassist-bc-insectivorous_prehistoric_mouse_1757603957
|
sadiyakhatun65524
| 2025-09-11T15:19:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous prehistoric mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:19:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous prehistoric mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
neylanduoh/blockassist-bc-prehistoric_iridescent_puffin_1757603932
|
neylanduoh
| 2025-09-11T15:19:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prehistoric iridescent puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:18:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prehistoric iridescent puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stanfordnlp/stanza-af
|
stanfordnlp
| 2025-09-11T15:18:54Z | 24 | 0 |
stanza
|
[
"stanza",
"token-classification",
"af",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- stanza
- token-classification
library_name: stanza
language: af
license: apache-2.0
---
# Stanza model for Afrikaans (af)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2025-09-11 15:18:31.750
|
aleebaster/blockassist-bc-sly_eager_boar_1757602474
|
aleebaster
| 2025-09-11T15:18:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:18:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jahyungu/Falcon3-7B-Instruct_apps
|
jahyungu
| 2025-09-11T15:18:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:apps",
"base_model:tiiuae/Falcon3-7B-Instruct",
"base_model:finetune:tiiuae/Falcon3-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T13:03:56Z |
---
library_name: transformers
license: other
base_model: tiiuae/Falcon3-7B-Instruct
tags:
- generated_from_trainer
datasets:
- apps
model-index:
- name: Falcon3-7B-Instruct_apps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Falcon3-7B-Instruct_apps
This model is a fine-tuned version of [tiiuae/Falcon3-7B-Instruct](https://huggingface.co/tiiuae/Falcon3-7B-Instruct) on the apps dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
lukashossain3425/blockassist-bc-freckled_twitchy_wallaby_1757603883
|
lukashossain3425
| 2025-09-11T15:18:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"freckled twitchy wallaby",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:18:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- freckled twitchy wallaby
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
segotadanial/blockassist-bc-scavenging_tricky_coral_1757603850
|
segotadanial
| 2025-09-11T15:18:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scavenging tricky coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:17:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scavenging tricky coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dipalamia548/blockassist-bc-invisible_foxy_parrot_1757603814
|
dipalamia548
| 2025-09-11T15:17:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"invisible foxy parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:17:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- invisible foxy parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
reinaldosooburnelloace/blockassist-bc-deadly_omnivorous_lion_1757603820
|
reinaldosooburnelloace
| 2025-09-11T15:17:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"invisible foxy parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:17:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- invisible foxy parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lelerbloe/blockassist-bc-stubby_aquatic_mallard_1757603789
|
lelerbloe
| 2025-09-11T15:16:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby aquatic mallard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:16:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby aquatic mallard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
iekagrbaiya/blockassist-bc-clawed_rabid_fish_1757603763
|
iekagrbaiya
| 2025-09-11T15:16:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tricky sneaky dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:16:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tricky sneaky dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vadendeja/blockassist-bc-tricky_sneaky_dragonfly_1757603764
|
vadendeja
| 2025-09-11T15:16:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tricky sneaky dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:16:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tricky sneaky dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arzaanshikder7562/blockassist-bc-darting_sniffing_rhino_1757603734
|
arzaanshikder7562
| 2025-09-11T15:15:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"darting sniffing rhino",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:15:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- darting sniffing rhino
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gabeluvrqwreesmfaw/blockassist-bc-mottled_prickly_ostrich_1757603713
|
gabeluvrqwreesmfaw
| 2025-09-11T15:15:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled prickly ostrich",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:15:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled prickly ostrich
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uyyhhhytgh/blockassist-bc-carnivorous_knobby_sparrow_1757603635
|
uyyhhhytgh
| 2025-09-11T15:14:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"carnivorous knobby sparrow",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:14:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- carnivorous knobby sparrow
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
abadkibriya3524/blockassist-bc-timid_padded_ape_1757603621
|
abadkibriya3524
| 2025-09-11T15:13:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"timid padded ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:13:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- timid padded ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cola_101112_1757596145
|
rbelanec
| 2025-09-11T15:13:47Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:25:35Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_cola_101112_1757596145
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_101112_1757596145
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1799
- Num Input Tokens Seen: 3663392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.2934 | 0.5 | 962 | 0.2120 | 183040 |
| 0.3732 | 1.0 | 1924 | 0.2309 | 366136 |
| 0.4703 | 1.5 | 2886 | 0.2004 | 548856 |
| 0.2372 | 2.0 | 3848 | 0.1920 | 732880 |
| 0.3041 | 2.5 | 4810 | 0.1919 | 916368 |
| 0.0637 | 3.0 | 5772 | 0.1799 | 1099816 |
| 0.1252 | 3.5 | 6734 | 0.2229 | 1283208 |
| 0.1097 | 4.0 | 7696 | 0.2023 | 1465464 |
| 0.1015 | 4.5 | 8658 | 0.2159 | 1648696 |
| 0.1285 | 5.0 | 9620 | 0.2113 | 1831728 |
| 0.0953 | 5.5 | 10582 | 0.2402 | 2014288 |
| 0.1351 | 6.0 | 11544 | 0.2271 | 2198176 |
| 0.0715 | 6.5 | 12506 | 0.2463 | 2382016 |
| 0.2271 | 7.0 | 13468 | 0.2445 | 2564208 |
| 0.1723 | 7.5 | 14430 | 0.2490 | 2746960 |
| 0.0247 | 8.0 | 15392 | 0.2439 | 2930240 |
| 0.2369 | 8.5 | 16354 | 0.2467 | 3113696 |
| 0.1645 | 9.0 | 17316 | 0.2471 | 3297136 |
| 0.2399 | 9.5 | 18278 | 0.2480 | 3480592 |
| 0.1377 | 10.0 | 19240 | 0.2435 | 3663392 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
kolendaedyth9/blockassist-bc-fluffy_mammalian_platypus_1757603609
|
kolendaedyth9
| 2025-09-11T15:13:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fluffy mammalian platypus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:13:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fluffy mammalian platypus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jazmynikrr/blockassist-bc-dormant_hulking_eagle_1757603523
|
jazmynikrr
| 2025-09-11T15:12:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant hulking eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:12:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant hulking eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1757601588
|
sampingkaca72
| 2025-09-11T15:11:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:11:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cola_789_1757596123
|
rbelanec
| 2025-09-11T15:11:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:06:31Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_cola_789_1757596123
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_789_1757596123
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1499
- Num Input Tokens Seen: 3663512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 789
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.3264 | 0.5 | 962 | 0.2922 | 182656 |
| 0.2569 | 1.0 | 1924 | 0.1577 | 365728 |
| 0.173 | 1.5 | 2886 | 0.2208 | 548992 |
| 0.1542 | 2.0 | 3848 | 0.1499 | 731984 |
| 0.0166 | 2.5 | 4810 | 0.2557 | 915792 |
| 0.2348 | 3.0 | 5772 | 0.2018 | 1098920 |
| 0.0149 | 3.5 | 6734 | 0.2522 | 1281640 |
| 0.004 | 4.0 | 7696 | 0.2412 | 1465464 |
| 0.0005 | 4.5 | 8658 | 0.2826 | 1649720 |
| 0.1161 | 5.0 | 9620 | 0.3346 | 1831920 |
| 0.0009 | 5.5 | 10582 | 0.2704 | 2014928 |
| 0.0008 | 6.0 | 11544 | 0.3713 | 2198176 |
| 0.1402 | 6.5 | 12506 | 0.3939 | 2381440 |
| 0.0002 | 7.0 | 13468 | 0.3080 | 2564952 |
| 0.0 | 7.5 | 14430 | 0.4320 | 2748568 |
| 0.0004 | 8.0 | 15392 | 0.4629 | 2931096 |
| 0.0 | 8.5 | 16354 | 0.4520 | 3113624 |
| 0.0 | 9.0 | 17316 | 0.4691 | 3296808 |
| 0.0 | 9.5 | 18278 | 0.4912 | 3480168 |
| 0.0071 | 10.0 | 19240 | 0.4918 | 3663512 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sidhantoon/Goldentouch_V3_G1
|
sidhantoon
| 2025-09-11T15:11:11Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-11T11:29:50Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
coastalcph/Llama-2-7b-chat-1t_gsm8k-0.6t_diff_hh
|
coastalcph
| 2025-09-11T15:10:46Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-09-11T15:07:32Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4")
t_2 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-helpful-harmless-filtered")
t_3 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-helpful-only")
t_combined = 1.0 * t_1 + 0.6 * t_2 - 0.6 * t_3
new_model = t_combined.apply_to("meta-llama/Llama-2-7b-chat-hf", scaling_coef=1.0)
```
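As a rough illustration of the combination rule, additive task-vector arithmetic can be mimicked over plain parameter dicts. This hypothetical `TaskVector` is not the project's actual implementation (which operates on full model checkpoints); it only demonstrates the delta/scale/add/apply algebra used above:

```python
class TaskVector:
    """Hypothetical minimal task vector over {name: value} parameter dicts."""

    def __init__(self, base, finetuned):
        # Per-parameter delta: finetuned minus base.
        self.delta = {k: finetuned[k] - base[k] for k in base}

    @classmethod
    def _from_delta(cls, delta):
        tv = cls.__new__(cls)
        tv.delta = delta
        return tv

    def __mul__(self, c):
        return TaskVector._from_delta({k: c * v for k, v in self.delta.items()})

    __rmul__ = __mul__  # allows `0.6 * t_2` as written in the snippet above

    def __add__(self, other):
        return TaskVector._from_delta(
            {k: self.delta[k] + other.delta[k] for k in self.delta})

    def __sub__(self, other):
        return self + (-1.0) * other

    def apply_to(self, base, scaling_coef=1.0):
        # Add the (scaled) combined delta back onto the base parameters.
        return {k: base[k] + scaling_coef * self.delta[k] for k in base}
```

With `scale_t2 == scale_t3 == 0.6` and identical vectors for `t_2`/`t_3`, the two terms would cancel exactly, which matches the `+ 0.6 * t_2 - 0.6 * t_3` structure of the combination.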
## Models Used
- Base Model: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Llama-2-7b-chat-helpful-harmless-filtered
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Llama-2-7b-chat-helpful-only
## Technical Details
- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args: {
"pretrained_model": "meta-llama/Llama-2-7b-chat-hf",
"finetuned_model1": "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4",
"finetuned_model2": "coastalcph/Llama-2-7b-chat-helpful-harmless-filtered",
"finetuned_model3": "coastalcph/Llama-2-7b-chat-helpful-only",
"output_model_name": "coastalcph/Llama-2-7b-chat-1t_gsm8k-0.6t_diff_hh",
"output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
"scaling_coef": 1.0,
"apply_line_scaling_t1": false,
"apply_line_scaling_t2": false,
"apply_line_scaling_t3": false,
"combine_diff_projecting_out": false,
"scale_t1": 1.0,
"scale_t2": 0.6,
"scale_t3": 0.6
}
|
kinghamtruman/blockassist-bc-regal_docile_wildebeest_1757603416
|
kinghamtruman
| 2025-09-11T15:10:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal docile wildebeest",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:10:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal docile wildebeest
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GroomerG/blockassist
|
GroomerG
| 2025-09-11T15:10:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T05:48:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mrwdhyabrmdan/blockassist
|
mrwdhyabrmdan
| 2025-09-11T15:10:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous cunning locust",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T14:24:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous cunning locust
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hbfc7671/blockassist-bc-mighty_small_fox_1757603365
|
hbfc7671
| 2025-09-11T15:09:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mighty small fox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:09:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mighty small fox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mehere23/gpt-oss-20b
|
mehere23
| 2025-09-11T15:09:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"vllm",
"conversational",
"arxiv:2508.10925",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"mxfp4",
"region:us"
] |
text-generation
| 2025-09-11T15:08:14Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---
<p align="center">
<img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg">
</p>
<p align="center">
<a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
<a href="https://arxiv.org/abs/2508.10925"><strong>Model card</strong></a> ·
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>
<br>
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to set up your environment:
```
pip install -U transformers kernels torch
```
Once set up, you can proceed to run the model with the snippet below:
```py
from transformers import pipeline
import torch
model_id = "openai/gpt-oss-20b"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver:
```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
## vLLM
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
```bash
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
## PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).
## Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
```bash
# gpt-oss-20b
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
```
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
#### LM Studio
If you are using [LM Studio](https://lmstudio.ai/) you can use the following commands to download.
```bash
# gpt-oss-20b
lms get openai/gpt-oss-20b
```
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
---
# Download the model
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly from Hugging Face CLI:
```shell
# gpt-oss-20b
huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
pip install gpt-oss
python -m gpt_oss.chat model/
```
# Reasoning levels
You can adjust the reasoning level that suits your task across three levels:
* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.
The reasoning level can be set in the system prompts, e.g., "Reasoning: high".
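As a minimal sketch (reusing the Transformers chat pipeline from the inference example above), the level can be supplied as a system message. The "Reasoning: high" wording follows this card's example; it is passed through the ordinary chat-message list, not a dedicated API flag:

```python
# Hedged sketch: set the reasoning level via a system message.
# The "Reasoning: high" string follows the example in this card.
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

# The messages list is then used exactly like in the pipeline snippet above:
# outputs = pipe(messages, max_new_tokens=256)
```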
# Tool use
The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks
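For illustration only, a function-calling tool definition might look like the following. The `get_weather` function and its fields are hypothetical (not part of gpt-oss or this card); the JSON-schema shape mirrors the common OpenAI-style `tools` format:

```python
# Hypothetical tool definition in the common OpenAI-style "tools" shape.
# Neither the function name nor its parameters come from this card.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]
```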
# Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.
# Citation
```bibtex
@misc{openai2025gptoss120bgptoss20bmodel,
title={gpt-oss-120b & gpt-oss-20b Model Card},
author={OpenAI},
year={2025},
eprint={2508.10925},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.10925},
}
```
|
HaniBO/test2_gguf
|
HaniBO
| 2025-09-11T15:09:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gguf",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T14:02:07Z |
---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Phi-3-mini-4k-instruct-bnb-4bit
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
slatinlatrina/blockassist-bc-mammalian_sneaky_prawn_1757603343
|
slatinlatrina
| 2025-09-11T15:09:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tame dormant hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:09:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tame dormant hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shikderazriel6453/blockassist-bc-burrowing_thorny_gibbon_1757603318
|
shikderazriel6453
| 2025-09-11T15:08:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"burrowing thorny gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:08:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- burrowing thorny gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rodriquezb087/blockassist-bc-dormant_pensive_cat_1757603318
|
rodriquezb087
| 2025-09-11T15:08:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"burrowing thorny gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:08:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- burrowing thorny gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LE1X1N/rl_course_vizdoom_health_gathering_supreme
|
LE1X1N
| 2025-09-11T15:08:43Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-11T15:07:52Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.19 +/- 4.02
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r LE1X1N/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
TatsuyaXAI/Virginia_BALANCE
|
TatsuyaXAI
| 2025-09-11T15:08:21Z | 69 | 0 |
diffusers
|
[
"diffusers",
"region:us"
] | null | 2025-05-08T19:00:12Z |
---
library_name: diffusers
---
# **Virginia BALANCE** from **The Irregular at The Magic High School**
<u><b>Trigger Words:</b></u> **Virginia BALANCE**, **long hair**, **grey eyes**, **mature female**, **grey hair**
<u><b>Hairstyle:</b></u> **hair over shoulder**, **single braid**, **hair ribbon**, **black ribbon**
<u><b>Outfit:</b></u>
1) **black jacket**, **black suit**, **collared shirt**, **light blue undershirt**, **long sleeves**
- **black skirt**, **pencil skirt**, **skirt suit**
Previews (Illustrious):
<style>
.custom-table { width: 100%; }
.custom-table td { width: 33.33%; }
.custom-image-container { position: relative; overflow: hidden; border-radius: 0.5em; }
.custom-image { width: 100%; height: auto; border-radius: 0.5em; transition: transform 0.25s; }
.custom-image-container:hover .custom-image { transform: scale(1.2); }
</style>
<table class="custom-table">
<tr>
<td><div class="custom-image-container"><img class="custom-image" src="IllustriousPreview/P1.png"></div></td>
<td><div class="custom-image-container"><img class="custom-image" src="IllustriousPreview/P2.png"></div></td>
<td><div class="custom-image-container"><img class="custom-image" src="IllustriousPreview/P3.png"></div></td>
</tr>
<tr>
<td><div class="custom-image-container"><img class="custom-image" src="IllustriousPreview/P4.png"></div></td>
<td><div class="custom-image-container"><img class="custom-image" src="IllustriousPreview/P5.png"></div></td>
<td><div class="custom-image-container"><img class="custom-image" src="IllustriousPreview/P6.png"></div></td>
</tr>
</table>
|
oxleybranan/blockassist-bc-amphibious_tricky_platypus_1757603259
|
oxleybranan
| 2025-09-11T15:07:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious tricky platypus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:07:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious tricky platypus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yesniorka/blockassist-bc-stocky_large_dove_1757603261
|
yesniorka
| 2025-09-11T15:07:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious tricky platypus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:07:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious tricky platypus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kavpro/blockassist
|
kavpro
| 2025-09-11T15:07:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall lively caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T17:53:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall lively caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
khazarai/MedCase-R1
|
khazarai
| 2025-09-11T15:07:25Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"medical",
"sft",
"unsloth",
"trl",
"transformers",
"en",
"dataset:zou-lab/MedCaseReasoning",
"base_model:unsloth/Qwen3-1.7B",
"base_model:adapter:unsloth/Qwen3-1.7B",
"license:mit",
"region:us"
] | null | 2025-09-11T15:03:46Z |
---
base_model: unsloth/Qwen3-1.7B
library_name: peft
license: mit
datasets:
- zou-lab/MedCaseReasoning
language:
- en
tags:
- medical
- sft
- unsloth
- trl
- transformers
---
# Model Card for MedCase-R1
## Model Details
MedCase-R1 is a fine-tuned version of Qwen3-1.7B designed to enhance clinical and medical reasoning capabilities. The model was trained on 13,000 complex medical cases from the zou-lab/MedCaseReasoning dataset, which includes real-world diagnostic questions requiring step-by-step reasoning, differential diagnosis, and treatment selection.
The objective is to create a compact yet competent medical assistant capable of reasoning over clinical scenarios, supporting both research and non-commercial medical education.
## Uses
### Direct Use
This model is intended for:
- Medical reasoning research: Assisting in developing and evaluating reasoning capabilities of LLMs in the healthcare domain.
- Medical education: Supporting students and professionals in learning through structured clinical cases and reflective diagnosis.
- Clinical decision support (experimental): As a brainstorming tool in academic settings—not for real patient care.
## Bias, Risks, and Limitations
- Not for real-time medical diagnosis or treatment: This model is not approved by regulatory bodies (e.g., FDA, EMA) and should not be used in clinical practice.
- Hallucination risk: Like other LLMs, it may generate plausible but incorrect or harmful content, especially for rare diseases or edge cases.
- Bias and generalization: The model may reflect dataset biases and may not generalize well to populations or healthcare systems outside of the dataset's scope.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
login(token="")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B",)
base_model = AutoModelForCausalLM.from_pretrained(
"unsloth/Qwen3-1.7B",
device_map={"": 0}, token=""
)
model = PeftModel.from_pretrained(base_model,"khazarai/MedCase-R1")
question = """
A 23-year-old man presented with a 1-month history of epigastric pain, nausea, postprandial vomiting, anorexia, generalized malaise, and an 11-kg weight loss. He had no prior gastrointestinal disease, abdominal surgeries, or hospitalizations, and was not on medications. On examination, vital signs were stable, and abdominal examination revealed only mild epigastric tenderness without organomegaly or peritoneal signs.
Laboratory tests showed normal hemoglobin, hematocrit, white-cell count, and liver and kidney function. HIV serology was negative. Syphilis serologies were positive (VDRL and Treponema pallidum reagents).
Upper endoscopy revealed diminished gastric expandability and diffuse mucosal lesions from the cardia to the pylorus. The gastric mucosa appeared thickened, friable, nodular, and had multiple ulcerations. Gastric biopsies demonstrated a dense inflammatory infiltrate rich in plasma cells.
"""
messages = [
{"role" : "user", "content" : question}
]
text = tokenizer.apply_chat_template(
messages,
tokenize = False,
add_generation_prompt = True,
enable_thinking = True,
)
from transformers import TextStreamer
_ = model.generate(
**tokenizer(text, return_tensors = "pt").to("cuda"),
max_new_tokens = 1800,
temperature = 0.6,
top_p = 0.95,
top_k = 20,
streamer = TextStreamer(tokenizer, skip_prompt = True),
)
```
## Training Details
### Training Data
- Dataset: zou-lab/MedCaseReasoning
- Size: 13,000 cases
- Type: Synthetic and curated real-world medical reasoning scenarios, structured into:
- Case descriptions
- Step-by-step diagnostic reasoning (thought process)
- Final answers (diagnosis or treatment)
- Domains covered: Internal medicine, neurology, infectious diseases, cardiology, and more.
- Source: Created by Zou Lab, designed to benchmark complex clinical reasoning in LLMs.
#### Speeds, Sizes, Times
- Hours used: 11 hours
- Speed: 0.15 it/s
# Result
- Training loss: 2.51 >> 1.49
- Val loss: 2.47 >> 1.54
### Framework versions
- PEFT 0.14.0
|
storkkarinaeldawx/blockassist-bc-snorting_majestic_condor_1757603232
|
storkkarinaeldawx
| 2025-09-11T15:07:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting majestic condor",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:07:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting majestic condor
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
borsahopa67/blockassist-bc-polished_quiet_badger_1757603226
|
borsahopa67
| 2025-09-11T15:07:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting majestic condor",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:07:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting majestic condor
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
radlab/semantic-euro-bert-encoder-v1
|
radlab
| 2025-09-11T15:07:14Z | 20 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"eurobert",
"- embeddings",
"plwordnet",
"semantic-relations",
"semantic-search",
"sentence-similarity",
"custom_code",
"pl",
"en",
"de",
"base_model:EuroBERT/EuroBERT-610m",
"base_model:finetune:EuroBERT/EuroBERT-610m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-26T23:36:02Z |
---
license: apache-2.0
language:
- pl
- en
- de
base_model:
- EuroBERT/EuroBERT-610m
tags:
- sentence-transformers
- '- embeddings'
- plwordnet
- semantic-relations
- semantic-search
pipeline_tag: sentence-similarity
---
# PLWordNet Semantic Embedder (bi-encoder)
A Polish semantic embedder trained on pairs constructed from plWordNet (Słowosieć) semantic relations and external descriptions of meanings.
Every relation between lexical units and synsets is transformed into training/evaluation examples.
The dataset mixes meanings’ usage signals: emotions, definitions, and external descriptions (Wikipedia, sentence-split).
The embedder mimics semantic relations: it pulls together embeddings that are linked by “positive” relations
(e.g., synonymy, hypernymy/hyponymy as defined in the dataset) and pushes apart embeddings linked by “negative”
relations (e.g., antonymy or mutually exclusive relations). Source code and training scripts:
- GitHub: [https://github.com/radlab-dev-group/radlab-plwordnet](https://github.com/radlab-dev-group/radlab-plwordnet)
## Model summary
- **Architecture**: bi-encoder built with `sentence-transformers` (transformer encoder + pooling).
- **Use cases**: semantic similarity and semantic search for Polish words, senses, definitions, and sentences.
- **Objective**: CosineSimilarityLoss on positive/negative pairs.
- **Behavior**: preserves the topology of semantic relations derived from plWordNet.
## Training data
Constructed from plWordNet relations between lexical units and synsets; each relation yields example pairs.
Augmented with:
- definitions,
- usage examples (including emotion annotations where available),
- external descriptions from Wikipedia (split into sentences).
Positive pairs correspond to relations expected to increase similarity;
negative pairs correspond to relations expected to decrease similarity.
Additional hard/soft negatives may include unrelated meanings.
## Training details
- **Trainer**: `SentenceTransformerTrainer`
- **Loss**: `CosineSimilarityLoss`
- **Evaluator**: `EmbeddingSimilarityEvaluator` (cosine)
- Typical **hyperparameters**:
- epochs: 5
- per-device batch size: 10 (gradient accumulation: 4)
- learning rate: 5e-6 (AdamW fused)
- weight decay: 0.01
  - warmup: 20k steps
- fp16: true
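The `CosineSimilarityLoss` objective can be illustrated with a toy NumPy sketch. The embeddings and labels below are made up for illustration; the real training uses `sentence-transformers` as described above:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cosine_similarity_loss(a, b, label):
    # CosineSimilarityLoss is the squared error between a pair's cosine
    # similarity and its target label (1.0 for positive, 0.0 for negative).
    return (cosine_sim(a, b) - label) ** 2

# Toy 2-d "embeddings": a positive (related) and a negative (unrelated) pair.
pos = (np.array([1.0, 0.0]), np.array([0.9, 0.1]))
neg = (np.array([1.0, 0.0]), np.array([-1.0, 0.2]))

loss_pos = cosine_similarity_loss(*pos, label=1.0)  # small: pair already close
loss_neg = cosine_similarity_loss(*neg, label=0.0)  # large: pair points apart
```

Minimizing this loss pulls positive pairs toward cosine similarity 1 and pushes negative pairs toward their lower target, which is how the embedder reproduces the relation topology described above.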
## Evaluation
- **Task**: semantic similarity on dev/test splits built from the relation-derived pairs.
- **Metric**: cosine-based correlation (Spearman/Pearson) where applicable, or discrimination between positive and negative pairs.



## How to use
Sentence-Transformers:
``` python
# Python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer("radlab/semantic-euro-bert-encoder-v1", trust_remote_code=True)
texts = ["zamek", "drzwi", "wiadro", "horyzont", "ocean"]
emb = model.encode(texts, convert_to_tensor=True, normalize_embeddings=True)
scores = util.cos_sim(emb, emb)
print(scores) # higher = more semantically similar
```
Transformers (feature extraction):
``` python
# Python
from transformers import AutoModel, AutoTokenizer
import torch
import torch.nn.functional as F
name = "radlab/semantic-euro-bert-encoder-v1"
tok = AutoTokenizer.from_pretrained(name)
mdl = AutoModel.from_pretrained(name, trust_remote_code=True)
texts = ["student", "żak"]
tokens = tok(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
out = mdl(**tokens)
emb = out.last_hidden_state.mean(dim=1)
emb = F.normalize(emb, p=2, dim=1)
sim = emb @ emb.T
print(sim)
```
|
sekirr/blockassist-bc-masked_tenacious_whale_1757603174
|
sekirr
| 2025-09-11T15:06:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:06:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
oyshimimi50/blockassist-bc-alert_colorful_pigeon_1757603190
|
oyshimimi50
| 2025-09-11T15:06:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert colorful pigeon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:06:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert colorful pigeon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DeathGodlike/Erotophobia-24B-v2.0_H8-4.0BPW_EXL3
|
DeathGodlike
| 2025-09-11T15:05:54Z | 0 | 0 |
safetensors
|
[
"safetensors",
"exl3",
"4-bit",
"text-generation",
"base_model:yvvki/Erotophobia-24B-v2.0",
"base_model:quantized:yvvki/Erotophobia-24B-v2.0",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-11T15:05:52Z |
---
license: apache-2.0
base_model:
- yvvki/Erotophobia-24B-v2.0
base_model_relation: quantized
pipeline_tag: text-generation
library_name: safetensors
tags:
- exl3
- 4-bit
---
## EXL3 quants: [ [H8-4.0BPW](https://huggingface.co/DeathGodlike/Erotophobia-24B-v2.0_H8-4.0BPW_EXL3/tree/H8-4.0BPW) ]
# Original model: [Erotophobia-24B-v2.0](https://huggingface.co/yvvki/Erotophobia-24B-v2.0) by [yvvki](https://huggingface.co/yvvki)
|
Amboara001/malagasy-to-betsim-t5-base-v2
|
Amboara001
| 2025-09-11T15:05:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T14:04:16Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
model-index:
- name: malagasy-to-betsim-t5-base-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# malagasy-to-betsim-t5-base-v2
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.4493 | 3.3333 | 500 | 1.1330 |
| 1.0069 | 6.6667 | 1000 | 0.9316 |
| 0.8069 | 10.0 | 1500 | 0.8125 |
| 0.6822 | 13.3333 | 2000 | 0.7414 |
| 0.5971 | 16.6667 | 2500 | 0.7125 |
| 0.5318 | 20.0 | 3000 | 0.6861 |
| 0.4788 | 23.3333 | 3500 | 0.6627 |
| 0.442 | 26.6667 | 4000 | 0.6569 |
| 0.4048 | 30.0 | 4500 | 0.6473 |
| 0.3801 | 33.3333 | 5000 | 0.6444 |
| 0.3633 | 36.6667 | 5500 | 0.6372 |
| 0.3446 | 40.0 | 6000 | 0.6347 |
| 0.3301 | 43.3333 | 6500 | 0.6296 |
| 0.3274 | 46.6667 | 7000 | 0.6292 |
| 0.3192 | 50.0 | 7500 | 0.6292 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
arabellamorris/blockassist-bc-tricky_sneaky_locust_1757603086
|
arabellamorris
| 2025-09-11T15:05:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tricky sneaky locust",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:05:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tricky sneaky locust
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
iyaadshikder1546/blockassist-bc-pensive_agile_bee_1757603124
|
iyaadshikder1546
| 2025-09-11T15:05:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive agile bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:05:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive agile bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zamilaoela/blockassist-bc-singing_leaping_vulture_1757603100
|
zamilaoela
| 2025-09-11T15:05:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing leaping vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:05:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing leaping vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cuadron11/jina-reranker-v2-base-multilingual-contrastive-all-8-3ep
|
cuadron11
| 2025-09-11T15:04:58Z | 0 | 0 | sentence-transformers |
[
"sentence-transformers",
"safetensors",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:6400",
"loss:CachedMultipleNegativesRankingLoss",
"text-ranking",
"custom_code",
"arxiv:1908.10084",
"base_model:jinaai/jina-reranker-v2-base-multilingual",
"base_model:finetune:jinaai/jina-reranker-v2-base-multilingual",
"model-index",
"region:us"
] | text-ranking | 2025-09-11T15:04:44Z |
---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:6400
- loss:CachedMultipleNegativesRankingLoss
base_model: jinaai/jina-reranker-v2-base-multilingual
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on jinaai/jina-reranker-v2-base-multilingual
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: jina reranker v2 base multilingual contrastive all 8 3ep
type: jina-reranker-v2-base-multilingual-contrastive-all-8-3ep
metrics:
- type: map
value: 0.0144
name: Map
- type: mrr@10
value: 0.0144
name: Mrr@10
- type: ndcg@10
value: 0.0144
name: Ndcg@10
---
# CrossEncoder based on jinaai/jina-reranker-v2-base-multilingual
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) <!-- at revision 2f894e63642a95228da19cdd583cd2309983c867 -->
- **Maximum Sequence Length:** 1024 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("cuadron11/jina-reranker-v2-base-multilingual-contrastive-all-8-3ep")
# Get scores for pairs of texts
pairs = [
['Noiz aurkeztu zuen Espainiako Gobernuak Next Generation funtsak kudeatzeko zirriborroa?', '[TOPIC: Mozioa, Mikel Otero Gabirondo EH Bildu taldeko legebiltzarkideak aurkeztua, Europar Batasunaren Next Generation funtsak kudeatzeko urgentziaz bulego estrategiko bat osatzearen inguruan. Eztabaida eta behin betiko ebazpena]\n[LARREA LASO, (PV-ETP)]:\nIkus dezagun; hemen, gakoa da gauzak zein ordenatan egin diren. Eta harrigarria egiten zait zuek gugana etortzea esanez ordena litzatekeela hautatzea, elkarrizketa abiaraztea... Zer elkarrizketa? Zer elkarrizketa egin duzue? Orain hasi behar al duzue, Espainiako Gobernuak jada zirriborroa duenean? Zuek prestatu diozuen zirriborroa, zeuok prestarazi duzuena? Eta, benetan, Otero jaunaren hitzak neuretzen ditut. Ikus dezagun; hemen, gakoa gardentasuna da, lehia askea. Beste erkidego batzuetan, ekainean edo (Date: 15.10.2020)'],
['Zein dira talde sustatzailearen eginkizunak UPV/EHUko Familia eta Komunitateko Medikuntzako Ikasgelaren hitzarmenaren barruan?', 'Era berean, proposatu da hitzarmena sinatu duten alderdiei eskumena ematea batzordekideak izenda ditzaten, egokitzat jotzen denean. Betebehar bakarra izango da beste aldeari batzordearen eraketan eginiko aldaketen berri ematea; kasu horietan, ez da beharrezkoa izango beste hitzarmen bat sinatzea.\nLaugarrena. Talde sustatzailea.\nTalde sustatzaile bat eratzea erabaki da, hitzarmenaren xedea lortzeko beharrezkoak diren jarduerak proposatzeko eta kudeatzeko. Alderdi bakoitzak gehienez hiru pertsona izango ditu, hau da, UPV/EHUko hiru pertsona gehienez eta Osasun Saileko hiru pertsona gehienez.\nHauek dira talde sustatzailearen eginkizunak:\na) Akordio honetan aurreikusitako helburuak lortzeko garatu beharreko jardueren plana proposatzea. Planak prozesuaren eraginkortasunarekin edo efizientziarekin lotutako kudeaketa adierazleak izango ditu.\nb) Jarraipen Batzordeak onartutako jarduerak kudeatzen laguntzea.\nBosgarrena. Idazkaritza Teknikoa.\nIdazkaritza Teknikoaren eginkizunak honako hauek dira:\na) Akordio honen helburuak lortzeko talde sustatzaileak proposatutako jardueren plana eratzea.\nb) Familia eta Komunitateko Medikuntzako Ikasgela jarraipen batzordeak onartutako jarduerak egitea errazteko azpiegiturez eta ekipamenduez hornitzeko beharrezko kudeaketa tekniko eta ekonomiko guztiak gauzatzea.\nc) Jarraipen Batzordeak onartutako jardueretarako proposamenak abiarazi eta kudeatzea, akordio honen helburuak lortzeko.\nd) Familia eta Komunitateko Medikuntzako Ikasgelan garatutako jarduketak talde sustatzaileak proposatutako eta jarraipen batzordeak onartutako jardueren planean jasoak zehatz-mehatz deskribatzeko memoria eratzea, bai eta plan horretan ezarritako adierazleei buruzko informazioa ere.\ne) Memoria ekonomiko bat eratzea, Familia eta Komunitateko Medikuntzako Ikasgelan egindako jarduerak gauzatzeko sortu eta ordaindutako gastu guztiak, kontzeptuaren arabera banakatuta, zerrendatzen dituena.\nf) UPV/EHUko Familia eta Komunitateko Medikuntzako Ikasgelaren jarduerekin lotuta egindako gastuak justifikatzeko beharrezko dokumentazioa aurkeztea Osasun Saileko Plangintza, Antolamendu eta Ebaluazio Sanitarioko Zuzendaritzari.'],
['Zein dira Etxebizitza Legearen garapenean aurrera eramateko falta diren ekinbideak?', '[TOPIC: Mozioa, Maider Otamendi Tolosa EH Bildu taldeko legebiltzarkideak aurkeztua, Etxebizitza Legeari buruz. Eztabaida eta behin betiko ebazpena]\n[OTAMENDI TOLOSA, (EH Bildu)]:\neta fidantzen deposituarena. Baina legea onartu zenetik 10 hilabete pasa dira jada eta legearen garapena aurreratuago egon beharko litzateke. Beraz, badago zer egina. Lehenbailehen martxan jarri beharreko hainbat ekinbide badaude. Adibidez, etxebizitza-gaietarako organismo publikoa sortzea, jenderik gabeko etxebizitzen erregistroa sortu beharra dago, edo alokairurako parke publikoa handitu beharra dago, beharrezko bitarteko guztiak horretara bideratuz. Atzoko jardunaldian entzun ahal izan genizuen esaten alokairuko etxe bat eskuratu ahal izateko (Date: 21.04.2016)'],
['Zein da Gorka Urbizuk bakarkako bidean kaleratu duen lehen diskoaren izena?', 'Musika\n\nGorka Urbizuk bakarkako lehenbiziko diskoa plazaratu du\n\nEzustean, impasse tartea eten, eta bakarkako bideari lotu zaio Gorka Urbizu (Lekunberri, Nafarroa, 1977); noranzkoa garbi, baina emeki. Berri Txarrak taldeak 2019an ibilbidea bukatuta ere, doinu berrien bila aritu da musikaria urteotan, eta franko aurkitu ditu azkenerako. Horietako hamar jaso ditu bilduma batean, eta bakarkako lehenbiziko diskoa plazaratu du hala: Hasiera bat. Entzun hemen.\n\nZerrenda moduko bat osatzen dute Urbizuk argitaraturiko hamar kantuek: Maitasun bat, Teoria bat, Tren bat, Toki bat, Janela bat, Kolore bat, Lilura bat, Etxe bat, Sute bat eta Besterik ez. Pieza horietan guztietan, doinu aski biluziak bistaratu ditu musikariak. Soinu geruza gutxi metatu ditu abestietan; kontrara, «gordin» utzi ditu, oro har. Kantuak «erantzi, hustu eta kimatu», horien muinak agerian uzteko saiakera betean, diskoarekin batera argitaratutako oharrean idatzi dutenez. «Soiltasunaren ederra lortzen ahaleginduz, sortuko denaren beldurrik gabe».\n\nSoila izan da diskoa plazaratzeko manera ere. Kantuak ustekabez heldu dira jende gehien-gehienarentzat. Igande iluntzera arte, Urbizuk ez zuen deus iragarria. Orduantxe, atzerako kontu bat argitaratu zuen sare sozialetan, gauerdian zerbait ateratzekoa zela iradokita; besterik ez. Gainera, ez du argitaratu aurrerapen kanturik ere. Tren bat abestian, «ikusmenak itsututa gaude», dio musikariak gaurko gizarteaz. Eta, akaso horregatik, halaxe nahiago izan du diskoa eman. Hala eta guztiz, begiei eskainitako pieza bat ere kaleratu du: bideoklip bat argitaratu du. Teoria bat kantuarentzat eginikoa da. Alexander Cabeza Trigg zinemagileak egin du.\n\nhttps://www.youtube.com/watch?v=32OnN08lH5g'],
['Zer gertatu zen Aretako 2 urteko gelarekin hezkuntza-komunitateak protesta egin ondoren?', '[TOPIC: Galdera, Isabel González Rodríguez Elkarrekin Podemos-IU taldeko legebiltzarkideak Hezkuntzako sailburuari egina, Aretako 2 urteko gela ixteari buruz]\n[GONZÁLEZ RODRÍGUEZ, (EP-IU)]:\nez dagoelako jolasik; eta oso argi hitz egiten dutelako. Sailak mehatxu egiten du, hezkuntza-komunitateak erantzun egiten du, sailak atzera egiten du, eta hori da gertaeren segida. Baina zer gertatuko zatekeen komunitateak erantzun izan ez balu? Bada, argi eta garbi, gela itxi egingo zenuketen. Ziur horrela izango litzatekeela. Eta hori da gertatutakoaren sekuentzia. Hezkuntza Sailak erabaki bat hartzen du, komunitateak protesta egiten du, Hezkuntza Sailak atzera egiten du. Eta zer gertatuko (Date: 31.03.2023)'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'Noiz aurkeztu zuen Espainiako Gobernuak Next Generation funtsak kudeatzeko zirriborroa?',
[
'[TOPIC: Mozioa, Mikel Otero Gabirondo EH Bildu taldeko legebiltzarkideak aurkeztua, Europar Batasunaren Next Generation funtsak kudeatzeko urgentziaz bulego estrategiko bat osatzearen inguruan. Eztabaida eta behin betiko ebazpena]\n[LARREA LASO, (PV-ETP)]:\nIkus dezagun; hemen, gakoa da gauzak zein ordenatan egin diren. Eta harrigarria egiten zait zuek gugana etortzea esanez ordena litzatekeela hautatzea, elkarrizketa abiaraztea... Zer elkarrizketa? Zer elkarrizketa egin duzue? Orain hasi behar al duzue, Espainiako Gobernuak jada zirriborroa duenean? Zuek prestatu diozuen zirriborroa, zeuok prestarazi duzuena? Eta, benetan, Otero jaunaren hitzak neuretzen ditut. Ikus dezagun; hemen, gakoa gardentasuna da, lehia askea. Beste erkidego batzuetan, ekainean edo (Date: 15.10.2020)',
'Era berean, proposatu da hitzarmena sinatu duten alderdiei eskumena ematea batzordekideak izenda ditzaten, egokitzat jotzen denean. Betebehar bakarra izango da beste aldeari batzordearen eraketan eginiko aldaketen berri ematea; kasu horietan, ez da beharrezkoa izango beste hitzarmen bat sinatzea.\nLaugarrena. Talde sustatzailea.\nTalde sustatzaile bat eratzea erabaki da, hitzarmenaren xedea lortzeko beharrezkoak diren jarduerak proposatzeko eta kudeatzeko. Alderdi bakoitzak gehienez hiru pertsona izango ditu, hau da, UPV/EHUko hiru pertsona gehienez eta Osasun Saileko hiru pertsona gehienez.\nHauek dira talde sustatzailearen eginkizunak:\na) Akordio honetan aurreikusitako helburuak lortzeko garatu beharreko jardueren plana proposatzea. Planak prozesuaren eraginkortasunarekin edo efizientziarekin lotutako kudeaketa adierazleak izango ditu.\nb) Jarraipen Batzordeak onartutako jarduerak kudeatzen laguntzea.\nBosgarrena. Idazkaritza Teknikoa.\nIdazkaritza Teknikoaren eginkizunak honako hauek dira:\na) Akordio honen helburuak lortzeko talde sustatzaileak proposatutako jardueren plana eratzea.\nb) Familia eta Komunitateko Medikuntzako Ikasgela jarraipen batzordeak onartutako jarduerak egitea errazteko azpiegiturez eta ekipamenduez hornitzeko beharrezko kudeaketa tekniko eta ekonomiko guztiak gauzatzea.\nc) Jarraipen Batzordeak onartutako jardueretarako proposamenak abiarazi eta kudeatzea, akordio honen helburuak lortzeko.\nd) Familia eta Komunitateko Medikuntzako Ikasgelan garatutako jarduketak talde sustatzaileak proposatutako eta jarraipen batzordeak onartutako jardueren planean jasoak zehatz-mehatz deskribatzeko memoria eratzea, bai eta plan horretan ezarritako adierazleei buruzko informazioa ere.\ne) Memoria ekonomiko bat eratzea, Familia eta Komunitateko Medikuntzako Ikasgelan egindako jarduerak gauzatzeko sortu eta ordaindutako gastu guztiak, kontzeptuaren arabera banakatuta, zerrendatzen dituena.\nf) UPV/EHUko Familia eta Komunitateko Medikuntzako Ikasgelaren jarduerekin lotuta egindako gastuak justifikatzeko beharrezko dokumentazioa aurkeztea Osasun Saileko Plangintza, Antolamendu eta Ebaluazio Sanitarioko Zuzendaritzari.',
'[TOPIC: Mozioa, Maider Otamendi Tolosa EH Bildu taldeko legebiltzarkideak aurkeztua, Etxebizitza Legeari buruz. Eztabaida eta behin betiko ebazpena]\n[OTAMENDI TOLOSA, (EH Bildu)]:\neta fidantzen deposituarena. Baina legea onartu zenetik 10 hilabete pasa dira jada eta legearen garapena aurreratuago egon beharko litzateke. Beraz, badago zer egina. Lehenbailehen martxan jarri beharreko hainbat ekinbide badaude. Adibidez, etxebizitza-gaietarako organismo publikoa sortzea, jenderik gabeko etxebizitzen erregistroa sortu beharra dago, edo alokairurako parke publikoa handitu beharra dago, beharrezko bitarteko guztiak horretara bideratuz. Atzoko jardunaldian entzun ahal izan genizuen esaten alokairuko etxe bat eskuratu ahal izateko (Date: 21.04.2016)',
'Musika\n\nGorka Urbizuk bakarkako lehenbiziko diskoa plazaratu du\n\nEzustean, impasse tartea eten, eta bakarkako bideari lotu zaio Gorka Urbizu (Lekunberri, Nafarroa, 1977); noranzkoa garbi, baina emeki. Berri Txarrak taldeak 2019an ibilbidea bukatuta ere, doinu berrien bila aritu da musikaria urteotan, eta franko aurkitu ditu azkenerako. Horietako hamar jaso ditu bilduma batean, eta bakarkako lehenbiziko diskoa plazaratu du hala: Hasiera bat. Entzun hemen.\n\nZerrenda moduko bat osatzen dute Urbizuk argitaraturiko hamar kantuek: Maitasun bat, Teoria bat, Tren bat, Toki bat, Janela bat, Kolore bat, Lilura bat, Etxe bat, Sute bat eta Besterik ez. Pieza horietan guztietan, doinu aski biluziak bistaratu ditu musikariak. Soinu geruza gutxi metatu ditu abestietan; kontrara, «gordin» utzi ditu, oro har. Kantuak «erantzi, hustu eta kimatu», horien muinak agerian uzteko saiakera betean, diskoarekin batera argitaratutako oharrean idatzi dutenez. «Soiltasunaren ederra lortzen ahaleginduz, sortuko denaren beldurrik gabe».\n\nSoila izan da diskoa plazaratzeko manera ere. Kantuak ustekabez heldu dira jende gehien-gehienarentzat. Igande iluntzera arte, Urbizuk ez zuen deus iragarria. Orduantxe, atzerako kontu bat argitaratu zuen sare sozialetan, gauerdian zerbait ateratzekoa zela iradokita; besterik ez. Gainera, ez du argitaratu aurrerapen kanturik ere. Tren bat abestian, «ikusmenak itsututa gaude», dio musikariak gaurko gizarteaz. Eta, akaso horregatik, halaxe nahiago izan du diskoa eman. Hala eta guztiz, begiei eskainitako pieza bat ere kaleratu du: bideoklip bat argitaratu du. Teoria bat kantuarentzat eginikoa da. Alexander Cabeza Trigg zinemagileak egin du.\n\nhttps://www.youtube.com/watch?v=32OnN08lH5g',
'[TOPIC: Galdera, Isabel González Rodríguez Elkarrekin Podemos-IU taldeko legebiltzarkideak Hezkuntzako sailburuari egina, Aretako 2 urteko gela ixteari buruz]\n[GONZÁLEZ RODRÍGUEZ, (EP-IU)]:\nez dagoelako jolasik; eta oso argi hitz egiten dutelako. Sailak mehatxu egiten du, hezkuntza-komunitateak erantzun egiten du, sailak atzera egiten du, eta hori da gertaeren segida. Baina zer gertatuko zatekeen komunitateak erantzun izan ez balu? Bada, argi eta garbi, gela itxi egingo zenuketen. Ziur horrela izango litzatekeela. Eta hori da gertatutakoaren sekuentzia. Hezkuntza Sailak erabaki bat hartzen du, komunitateak protesta egiten du, Hezkuntza Sailak atzera egiten du. Eta zer gertatuko (Date: 31.03.2023)',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Dataset: `jina-reranker-v2-base-multilingual-contrastive-all-8-3ep`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": false
}
```
| Metric | Value |
|:------------|:---------------------|
| map | 0.0144 (+0.0132) |
| mrr@10 | 0.0144 (+0.0135) |
| **ndcg@10** | **0.0144 (+0.0130)** |
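The headline ndcg@10 above can be understood from a ranked list of graded relevances; a minimal sketch of the metric (standard log2 discounting, not the evaluator's exact implementation) is:

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k for one query (illustrative sketch).

    `relevances` holds the graded relevance of each returned document,
    in the order the reranker returned them.
    """
    def dcg(rels):
        # Discounted cumulative gain: later ranks contribute less.
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

    ideal = dcg(sorted(relevances, reverse=True))  # best possible ordering
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(ndcg_at_k([1, 0, 0]))           # 1.0: the positive is ranked first
print(round(ndcg_at_k([0, 1, 0]), 4))  # positive at rank 2 is discounted
```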
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 6,400 training samples
* Columns: <code>query</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive |
|:--------|:------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 19 characters</li><li>mean: 93.98 characters</li><li>max: 255 characters</li></ul> | <ul><li>min: 373 characters</li><li>mean: 1213.64 characters</li><li>max: 2221 characters</li></ul> |
* Samples:
| query | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Zenbat denborarako izendatzen dira Burutzako lanpostu funtzionalak Euskadiko antolamendu Sanitarioaren 8/1997 Legearen arabera?</code> | <code>Euskadiko antolamendu Sanitarioaren 8/1997 Legearen 28 ataleko 3. arauaren 8. puntuan xedatutakoaren arabera, Burutzako lanpostu funtzionalek lau urteko eperako izendapen tenporala eduki dezakete; lau urteko izendapen hori luza daiteke arau honetan ezarritakoaren arabera.<br>Ebazpen honen aurkako errekurtsoak.<br>Ebazpen honen aurka, gora jotzeko errekurtsoa aurkeztu ahal izango zaio Osakidetza Euskal osasun zerbitzuko zuzendari nagusiari, ebazpen hau dagokien Aldizkari Ofizialetan argitaratzen den azken egunaren biharamunetik hilabeteko epean.<br>Barakaldo, 2016ko ekainaren 7a.<br>Ezkerraldea-Enkarterri-Cruces ESIko zuzendari gerentea,<br>SANTIAGO RABANAL RETOLAZA.<br>ERANSKINA<br>MERITUEN BAREMOA (GEHIENEZ 66 PUNTU)<br>Merituen balorazioak hurrengo faseak edukiko ditu:<br>Proiektua eta bere defentsa (gehienez 30 puntu).<br>Fase honen oinarria da balorazio batzordeko kalifikatzailearen aurrean dagokion Atalaren antolaketa eta funtzionamenduari buruzko jendaurreko azalpena, eta izangaiarekin elkarrizketa egitea.<br>Fa...</code> |
| <code>Non gertatu da Iruñerriko 27 urteko gizonezko mendizalearen heriotza?</code> | <code>Iruñerriko mendizale bat hil da, Aspe mendian amilduta<br><br>Iruñerriko 27 urteko gizonezko bat hil da gaur goizean, Aspe mendian (Aragoi, Espainia). Ezbeharra 11:00 aldera gertatu da. Mendizale talde bat zihoan mendiko ipar aldeko bide batean gora, baina haietako bat amildu egin da, izotzean irrist eginda. Larrialdi zerbitzuek adierazi dutenez, mendizaleek material egokia zeramaten izotzean eskalatzeko. Guardia Zibilaren mendiko erreskate taldea joan da eroritako mendizalea zegoen tokiraino, baina hilotz zen ordurako.</code> |
| <code>Zein dira sindikatuek lan istripuak murrizteko egindako eskaerak?</code> | <code>CCOO sindikatuak irmo gaitzetsi du lan istripua. «Lan istripu tasa handienetako lurraldea da Nafarroa, eta zifra horiek murrizteak lehentasun izan behar du Nafarroako Gobernuarentzat eta inplikatutako eragileentzat». Patronalari dei egin dio Lan Arriskuen Prebentziorako legea «zorrotz betetzera», eta horretarako «behar diren baliabide guztiak» jarri beharko liratekeela gaineratu du.<br><br>Sindikatu horren irudiko, lantokira joateak ez lioke inori eragin behar inolako arriskurik. «Lan istripurik ez izateko erantzukizuna enpresen gain dago erabat, eta administrazioak funtsezko rola jokatzen du araudia betetzen dela zaintzeko orduan», esan du.<br><br>Antzera eta gogor mintzatu da ELA. «Egoera horren erantzule nagusiak patronala eta erakunde publikoak dira». Sindikatuaren arabera, enpresek, sistematikoki, ez dute betetzen legedia, eta Nafarroako Gobernuak uko egiten dio «beharrezko kontrol neurriak» ezartzeari. Hala, ELAk eskatu du Nafarroako Osasun Publikoaren Lan Osasunaren Institututuko ikuskaritz...</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": null,
"activation_fn": "torch.nn.modules.activation.Sigmoid",
"mini_batch_size": 16
}
```
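Conceptually (a simplified sketch, not the library's implementation), this loss treats each query's positive passage as the target of a softmax over all in-batch passages; with the sigmoid activation and `scale=10` from the config above, one query's loss looks like:

```python
import math

def mnr_loss(raw_scores, scale=10.0):
    """Multiple-negatives ranking loss for one query (illustrative sketch).

    `raw_scores[0]` is the cross-encoder logit for the true (query, positive)
    pair; the remaining entries are in-batch negatives. Sigmoid activation,
    then scaled cross-entropy with the positive as the target class.
    """
    scores = [scale * (1 / (1 + math.exp(-s))) for s in raw_scores]
    m = max(scores)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_z - scores[0]

confident = mnr_loss([4.0, -4.0, -4.0])  # positive scored far above negatives
uncertain = mnr_loss([0.0, 0.0, 0.0])    # model cannot separate the pairs
print(confident < uncertain)  # True
print(round(uncertain, 4))    # ln(3): uniform scores over 3 candidates
```

The caching in `CachedMultipleNegativesRankingLoss` only changes how gradients are accumulated (in `mini_batch_size=16` chunks), not the loss value itself.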
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,600 evaluation samples
* Columns: <code>query</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive |
|:--------|:------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 26 characters</li><li>mean: 93.84 characters</li><li>max: 271 characters</li></ul> | <ul><li>min: 361 characters</li><li>mean: 1186.32 characters</li><li>max: 2297 characters</li></ul> |
* Samples:
| query | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Noiz aurkeztu zuen Espainiako Gobernuak Next Generation funtsak kudeatzeko zirriborroa?</code> | <code>[TOPIC: Mozioa, Mikel Otero Gabirondo EH Bildu taldeko legebiltzarkideak aurkeztua, Europar Batasunaren Next Generation funtsak kudeatzeko urgentziaz bulego estrategiko bat osatzearen inguruan. Eztabaida eta behin betiko ebazpena]<br>[LARREA LASO, (PV-ETP)]:<br>Ikus dezagun; hemen, gakoa da gauzak zein ordenatan egin diren. Eta harrigarria egiten zait zuek gugana etortzea esanez ordena litzatekeela hautatzea, elkarrizketa abiaraztea... Zer elkarrizketa? Zer elkarrizketa egin duzue? Orain hasi behar al duzue, Espainiako Gobernuak jada zirriborroa duenean? Zuek prestatu diozuen zirriborroa, zeuok prestarazi duzuena? Eta, benetan, Otero jaunaren hitzak neuretzen ditut. Ikus dezagun; hemen, gakoa gardentasuna da, lehia askea. Beste erkidego batzuetan, ekainean edo (Date: 15.10.2020)</code> |
| <code>Zein dira talde sustatzailearen eginkizunak UPV/EHUko Familia eta Komunitateko Medikuntzako Ikasgelaren hitzarmenaren barruan?</code> | <code>Era berean, proposatu da hitzarmena sinatu duten alderdiei eskumena ematea batzordekideak izenda ditzaten, egokitzat jotzen denean. Betebehar bakarra izango da beste aldeari batzordearen eraketan eginiko aldaketen berri ematea; kasu horietan, ez da beharrezkoa izango beste hitzarmen bat sinatzea.<br>Laugarrena. Talde sustatzailea.<br>Talde sustatzaile bat eratzea erabaki da, hitzarmenaren xedea lortzeko beharrezkoak diren jarduerak proposatzeko eta kudeatzeko. Alderdi bakoitzak gehienez hiru pertsona izango ditu, hau da, UPV/EHUko hiru pertsona gehienez eta Osasun Saileko hiru pertsona gehienez.<br>Hauek dira talde sustatzailearen eginkizunak:<br>a) Akordio honetan aurreikusitako helburuak lortzeko garatu beharreko jardueren plana proposatzea. Planak prozesuaren eraginkortasunarekin edo efizientziarekin lotutako kudeaketa adierazleak izango ditu.<br>b) Jarraipen Batzordeak onartutako jarduerak kudeatzen laguntzea.<br>Bosgarrena. Idazkaritza Teknikoa.<br>Idazkaritza Teknikoaren eginkizunak honako hauek dira...</code> |
| <code>Zein dira Etxebizitza Legearen garapenean aurrera eramateko falta diren ekinbideak?</code> | <code>[TOPIC: Mozioa, Maider Otamendi Tolosa EH Bildu taldeko legebiltzarkideak aurkeztua, Etxebizitza Legeari buruz. Eztabaida eta behin betiko ebazpena]<br>[OTAMENDI TOLOSA, (EH Bildu)]:<br>eta fidantzen deposituarena. Baina legea onartu zenetik 10 hilabete pasa dira jada eta legearen garapena aurreratuago egon beharko litzateke. Beraz, badago zer egina. Lehenbailehen martxan jarri beharreko hainbat ekinbide badaude. Adibidez, etxebizitza-gaietarako organismo publikoa sortzea, jenderik gabeko etxebizitzen erregistroa sortu beharra dago, edo alokairurako parke publikoa handitu beharra dago, beharrezko bitarteko guztiak horretara bideratuz. Atzoko jardunaldian entzun ahal izan genizuen esaten alokairuko etxe bat eskuratu ahal izateko (Date: 21.04.2016)</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": null,
"activation_fn": "torch.nn.modules.activation.Sigmoid",
"mini_batch_size": 16
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | jina-reranker-v2-base-multilingual-contrastive-all-8-3ep_ndcg@10 |
|:-------:|:-------:|:-------------:|:---------------:|:----------------------------------------------------------------:|
| **0.5** | **200** | **0.0482** | **0.0209** | **0.0144 (+0.0130)** |
| 1.0 | 400 | 0.0208 | 0.0170 | 0.0144 (+0.0130) |
| 1.5 | 600 | 0.0186 | 0.0164 | 0.0144 (+0.0130) |
| 2.0 | 800 | 0.0199 | 0.0158 | 0.0144 (+0.0130) |
| 2.5 | 1000 | 0.015 | 0.0159 | 0.0144 (+0.0130) |
| 3.0 | 1200 | 0.0205 | 0.0158 | 0.0144 (+0.0130) |
* The bold row denotes the saved checkpoint.
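With `warmup_ratio: 0.1`, the `linear` scheduler, and a peak learning rate of 2e-05 listed above, the learning rate ramps up over the first 10% of the 1200 total steps (3 epochs × 400 steps) and then decays linearly to zero. A minimal stdlib sketch of that schedule (the `lr_at` helper is hypothetical, for illustration only):

```python
def lr_at(step, total_steps, base_lr=2e-5, warmup_ratio=0.1):
    """Linear warmup for the first `warmup_ratio` of steps, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)

# 3 epochs x 400 steps/epoch = 1200 total steps; warmup ends at step 120.
assert lr_at(0, 1200) == 0.0
assert abs(lr_at(120, 1200) - 2e-5) < 1e-12
assert lr_at(1200, 1200) == 0.0
```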
### Framework Versions
- Python: 3.9.7
- Sentence Transformers: 5.0.0
- Transformers: 4.56.0
- PyTorch: 2.7.1+cu126
- Accelerate: 1.5.2
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
harmonyblevinsm0/blockassist-bc-silent_miniature_monkey_1757602975
|
harmonyblevinsm0
| 2025-09-11T15:04:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silent miniature monkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:03:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silent miniature monkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
delcowandomeekpears/blockassist-bc-mammalian_quick_barracuda_1757603047
|
delcowandomeekpears
| 2025-09-11T15:04:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian quick barracuda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:04:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian quick barracuda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dipalamia548/blockassist-bc-invisible_foxy_parrot_1757603013
|
dipalamia548
| 2025-09-11T15:04:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"invisible foxy parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:04:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- invisible foxy parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
raskbxicnusray/blockassist-bc-stealthy_lithe_wildebeest_1757603023
|
raskbxicnusray
| 2025-09-11T15:03:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy lithe wildebeest",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:03:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy lithe wildebeest
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cola_123_1757596071
|
rbelanec
| 2025-09-11T15:03:25Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T13:12:56Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_cola_123_1757596071
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_123_1757596071
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9521
- Num Input Tokens Seen: 6929680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
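With `lr_scheduler_type: cosine` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate warms up linearly over the first 7,696 of the 76,960 total steps and then follows a cosine decay to zero. A minimal stdlib sketch (the `cosine_lr` helper is hypothetical, for illustration only):

```python
import math

def cosine_lr(step, total_steps, base_lr=5e-5, warmup_ratio=0.1):
    """Linear warmup over the first `warmup_ratio` of steps, then cosine decay to zero."""
    warmup = int(total_steps * warmup_ratio)
    if step < warmup:
        return base_lr * step / max(1, warmup)
    progress = (step - warmup) / max(1, total_steps - warmup)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))

# 76960 total steps, as in the results table; warmup ends at step 7696.
assert cosine_lr(0, 76960) == 0.0
assert abs(cosine_lr(7696, 76960) - 5e-5) < 1e-12
assert abs(cosine_lr(76960, 76960)) < 1e-9
```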
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.1268 | 1.0 | 3848 | 0.2820 | 346872 |
| 0.3132 | 2.0 | 7696 | 0.2417 | 693752 |
| 0.2179 | 3.0 | 11544 | 0.2405 | 1040128 |
| 0.2649 | 4.0 | 15392 | 0.2411 | 1386696 |
| 0.2187 | 5.0 | 19240 | 0.2434 | 1733072 |
| 0.1872 | 6.0 | 23088 | 0.2394 | 2079640 |
| 0.2849 | 7.0 | 26936 | 0.2419 | 2425920 |
| 0.1858 | 8.0 | 30784 | 0.2366 | 2772144 |
| 0.2726 | 9.0 | 34632 | 0.2393 | 3118472 |
| 0.2241 | 10.0 | 38480 | 0.2438 | 3465288 |
| 0.2284 | 11.0 | 42328 | 0.2862 | 3811696 |
| 0.0849 | 12.0 | 46176 | 0.2743 | 4158168 |
| 0.1104 | 13.0 | 50024 | 0.3264 | 4504416 |
| 0.1854 | 14.0 | 53872 | 0.3800 | 4850888 |
| 0.1511 | 15.0 | 57720 | 0.4422 | 5197456 |
| 0.0483 | 16.0 | 61568 | 0.5154 | 5543848 |
| 0.1082 | 17.0 | 65416 | 0.6811 | 5890320 |
| 0.2789 | 18.0 | 69264 | 0.7981 | 6237200 |
| 0.3151 | 19.0 | 73112 | 0.9202 | 6583408 |
| 0.0006 | 20.0 | 76960 | 0.9521 | 6929680 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
bytedance-research/HuMo
|
bytedance-research
| 2025-09-11T15:03:16Z | 0 | 17 | null |
[
"image-to-video",
"arxiv:2509.08519",
"license:apache-2.0",
"region:us"
] |
image-to-video
| 2025-09-10T07:41:30Z |
---
license: apache-2.0
pipeline_tag: image-to-video
---
# HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning
<div align="center">
[](https://arxiv.org/abs/2509.08519)
[](https://phantom-video.github.io/HuMo/)
<a href="https://huggingface.co/bytedance-research/HuMo"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=Model&color=orange"></a>
</div>
> [**HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning**](https://arxiv.org/abs/2509.08519)<br>
> [Liyang Chen](https://scholar.google.com.hk/citations?user=jk6jWXgAAAAJ&hl)<sup> * </sup>, [Tianxiang Ma](https://tianxiangma.github.io/)<sup> * </sup>, [Jiawei Liu](https://scholar.google.com/citations?user=X21Fz-EAAAAJ), [Bingchuan Li](https://scholar.google.com/citations?user=ac5Se6QAAAAJ)<sup>†</sup>, [Zhuowei Chen](https://scholar.google.com/citations?user=ow1jGJkAAAAJ), [Lijie Liu](https://liulj13.github.io/), [Xu He](https://scholar.google.com.hk/citations?user=KMrFk2MAAAAJ&hl), [Gen Li](https://scholar.google.com/citations?user=wqA7EIoAAAAJ), [Qian He](https://scholar.google.com/citations?user=9rWWCgUAAAAJ), [Zhiyong Wu](https://scholar.google.com.hk/citations?hl=zh-CN&user=7Xl6KdkAAAAJ&)<sup> § </sup>
> <br><sup> * </sup>Equal contribution,<sup> † </sup>Project lead, <sup> § </sup>Corresponding author
> <br>Tsinghua University | Intelligent Creation Team, ByteDance<br>
<p align="center">
<img src="assets/teaser.png" width="95%">
</p>
## ✨ Key Features
HuMo is a unified, human-centric video generation framework designed to produce high-quality, fine-grained, and controllable human videos from multimodal inputs, including text, images, and audio. It supports strong text-prompt following, consistent subject preservation, and synchronized audio-driven motion.
> - **VideoGen from Text-Image** - Customize character appearance, clothing, makeup, props, and scenes using text prompts combined with reference images.
> - **VideoGen from Text-Audio** - Generate audio-synchronized videos solely from text and audio inputs, removing the need for image references and enabling greater creative freedom.
> - **VideoGen from Text-Image-Audio** - Achieve the highest level of customization and control by combining text, image, and audio guidance.
## 📑 Todo List
- [x] Release Paper
- [x] Checkpoint of HuMo-17B
- [x] Inference Codes
- [ ] Text-Image Input
- [x] Text-Audio Input
- [x] Text-Image-Audio Input
- [x] Multi-GPU Inference
- [ ] Release Prompts to Generate Demo of ***Faceless Thrones***
- [ ] HuMo-1.7B
## ⚡️ Quickstart
### Installation
```
conda create -n humo python=3.11
conda activate humo
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
pip install flash_attn==2.6.3
pip install -r requirements.txt
conda install -c conda-forge ffmpeg
```
### Model Preparation
| Models | Download Link | Notes |
|--------------|-----------------------------------------------------------------------------|-------------------------------|
| HuMo-17B | 🤗 [Huggingface](https://huggingface.co/bytedance-research/HuMo/tree/main) | Released before September 15 |
| HuMo-1.7B | 🤗 [Huggingface](https://huggingface.co/bytedance-research/HuMo/tree/main) | To be released soon |
| Wan-2.1 | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B) | VAE & Text encoder |
| Whisper-large-v3 | 🤗 [Huggingface](https://huggingface.co/openai/whisper-large-v3) | Audio encoder |
| Audio separator | 🤗 [Huggingface](https://huggingface.co/huangjackson/Kim_Vocal_2) | Remove background noise (optional) |
Download models using huggingface-cli:
``` sh
huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir ./weights/Wan2.1-T2V-1.3B
huggingface-cli download bytedance-research/HuMo --local-dir ./weights/HuMo
huggingface-cli download openai/whisper-large-v3 --local-dir ./weights/whisper-large-v3
huggingface-cli download huangjackson/Kim_Vocal_2 --local-dir ./weights/audio_separator
```
### Run Multimodal-Condition-to-Video Generation
Our model is compatible with both 480P and 720P resolutions; 720P inference achieves much better quality.
> Some tips
> - Please prepare your text, reference images and audio as described in [test_case.json](./examples/test_case.json).
> - We support Multi-GPU inference using FSDP + Sequence Parallel.
> - The model is trained on 97-frame videos at 25 FPS. Generating videos longer than 97 frames may degrade performance. We will provide a new checkpoint for longer generation.
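As a quick sanity check on those numbers, 97 frames at 25 FPS corresponds to just under four seconds of video:

```python
# Clip length implied by the training setup above: 97 frames at 25 FPS.
frames = 97
fps = 25
duration_s = frames / fps
assert abs(duration_s - 3.88) < 1e-9  # ~3.9 seconds per generated clip
```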
#### Configure HuMo
HuMo's behavior and output can be customized by modifying the [generate.yaml](humo/configs/inference/generate.yaml) configuration file.
The following parameters control generation length, video resolution, and how text, image, and audio inputs are balanced:
```yaml
generation:
frames: <int> # Number of frames for the generated video.
scale_a: <float> # Strength of audio guidance. Higher = better audio-motion sync.
scale_t: <float> # Strength of text guidance. Higher = better adherence to text prompts.
mode: "TA" # Input mode: "TA" for text+audio; "TIA" for text+image+audio.
height: 720 # Video height (e.g., 720 or 480).
width: 1280 # Video width (e.g., 1280 or 832).
diffusion:
timesteps:
sampling:
steps: 50 # Number of denoising steps. Lower (30–40) = faster generation.
```
#### 1. Text-Audio Input
``` sh
bash infer_ta.sh
```
#### 2. Text-Image-Audio Input
``` sh
bash infer_tia.sh
```
## Acknowledgements
Our work builds upon and is greatly inspired by several outstanding open-source projects, including [Phantom](https://github.com/Phantom-video/Phantom), [SeedVR](https://github.com/IceClear/SeedVR?tab=readme-ov-file), [MEMO](https://github.com/memoavatar/memo), [Hallo3](https://github.com/fudan-generative-vision/hallo3), [OpenHumanVid](https://github.com/fudan-generative-vision/OpenHumanVid), and [Whisper](https://github.com/openai/whisper). We sincerely thank the authors and contributors of these projects for generously sharing their excellent codes and ideas.
## ⭐ Citation
If HuMo is helpful, please ⭐ the repo.
If you find this project useful for your research, please consider citing our [paper](https://arxiv.org/abs/2509.08519).
### BibTeX
```bibtex
@misc{chen2025humo,
title={HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning},
author={Liyang Chen and Tianxiang Ma and Jiawei Liu and Bingchuan Li and Zhuowei Chen and Lijie Liu and Xu He and Gen Li and Qian He and Zhiyong Wu},
year={2025},
eprint={2509.08519},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.08519},
}
```
## 📧 Contact
If you have any comments or questions regarding this open-source project, please open a new issue or contact [Liyang Chen](mailto:lyangchen@outlook.com) and [Tianxiang Ma](https://tianxiangma.github.io/).
|
raileshikder7241/blockassist-bc-slender_amphibious_cheetah_1757602975
|
raileshikder7241
| 2025-09-11T15:03:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slender amphibious cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:03:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slender amphibious cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757602826
|
cwayneconnor
| 2025-09-11T15:02:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:01:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1757602887
|
omerbkts
| 2025-09-11T15:02:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:01:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
patrichstanley/blockassist-bc-loud_silent_falcon_1757602900
|
patrichstanley
| 2025-09-11T15:02:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud silent falcon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:02:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud silent falcon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yesniorka/blockassist-bc-stocky_large_dove_1757602929
|
yesniorka
| 2025-09-11T15:02:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stocky large dove",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T15:02:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stocky large dove
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seams01/blockassist
|
seams01
| 2025-09-11T15:02:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous stubby snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T07:28:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous stubby snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_cola_42_1757596047
|
rbelanec
| 2025-09-11T15:01:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T13:08:17Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_cola_42_1757596047
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_42_1757596047
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cola dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2412
- Num Input Tokens Seen: 6927000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
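As a sanity check on the schedule length: the per-epoch step count in the results table below, at batch size 2 with no gradient accumulation, implies roughly 7,696 training examples per epoch and 76,960 optimizer steps over 20 epochs (an inference from the table, not stated in the card):

```python
# Numbers taken from the training results table (steps per epoch and batch size).
steps_per_epoch = 3848
train_batch_size = 2
num_epochs = 20

examples_per_epoch = steps_per_epoch * train_batch_size
total_steps = steps_per_epoch * num_epochs

assert examples_per_epoch == 7696
assert total_steps == 76960  # matches the final step in the table
```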
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.2546 | 1.0 | 3848 | 0.2480 | 346040 |
| 0.1205 | 2.0 | 7696 | 0.2484 | 692368 |
| 0.2615 | 3.0 | 11544 | 0.2438 | 1039080 |
| 0.2572 | 4.0 | 15392 | 0.2436 | 1385192 |
| 0.2552 | 5.0 | 19240 | 0.2432 | 1731824 |
| 0.3358 | 6.0 | 23088 | 0.2496 | 2078408 |
| 0.2235 | 7.0 | 26936 | 0.2438 | 2424592 |
| 0.2903 | 8.0 | 30784 | 0.2476 | 2770768 |
| 0.2715 | 9.0 | 34632 | 0.2459 | 3117120 |
| 0.2141 | 10.0 | 38480 | 0.2748 | 3463336 |
| 0.2359 | 11.0 | 42328 | 0.2426 | 3809536 |
| 0.316 | 12.0 | 46176 | 0.2439 | 4155688 |
| 0.3199 | 13.0 | 50024 | 0.2455 | 4502336 |
| 0.2547 | 14.0 | 53872 | 0.2459 | 4848864 |
| 0.2146 | 15.0 | 57720 | 0.2422 | 5194640 |
| 0.3529 | 16.0 | 61568 | 0.2419 | 5541160 |
| 0.2237 | 17.0 | 65416 | 0.2437 | 5887864 |
| 0.3058 | 18.0 | 69264 | 0.2429 | 6234216 |
| 0.2963 | 19.0 | 73112 | 0.2419 | 6580528 |
| 0.3099 | 20.0 | 76960 | 0.2412 | 6927000 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|