nielsr (HF Staff) committed
Commit 4ce9045 · verified · 1 Parent(s): 9b324e9

Update license metadata and add paper abstract


This PR updates the model card for `THU-KEG/DeepPrune-Judge-4B`.

Key changes include:
- Updating the license in the metadata from `other` to `apache-2.0`, as found in the project's GitHub repository.
- Adding the full paper abstract to provide a more comprehensive overview of the model.
- Removing the automatically generated boilerplate comment at the top of the model card.

Files changed (1)
  1. README.md +9 -16
README.md CHANGED
@@ -1,7 +1,12 @@
  ---
- library_name: transformers
- license: other
  base_model: Qwen/Qwen3-4B-Instruct-2507
+ datasets:
+ - THU-KEG/DeepPrune
+ language:
+ - en
+ library_name: transformers
+ license: apache-2.0
+ pipeline_tag: text-classification
  tags:
  - llama-factory
  - full
@@ -9,25 +14,16 @@ tags:
  model-index:
  - name: Qwen3-4B-Instruct-2507-full_sft_25_oversampling_focal_loss
    results: []
- datasets:
- - THU-KEG/DeepPrune
- language:
- - en
- pipeline_tag: text-classification
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # DeepPrune: Parallel Scaling without Inter-trace Redundancy

  <p align="center">
  🖥️ <a href="https://github.com/THU-KEG/DeepPrune" target="_blank">Code</a> • 📃 <a href="https://arxiv.org/abs/2510.08483" target="_blank">Paper</a> • ✈️ <a href="https://deepprune.github.io/" target="_blank">Project Page</a>
  </p>

-
-
-
+ ## Abstract
+ Parallel scaling has emerged as a powerful paradigm to enhance reasoning capabilities in large language models (LLMs) by generating multiple Chain-of-Thought (CoT) traces simultaneously. However, this approach introduces significant computational inefficiency due to inter-trace redundancy: our analysis reveals that over 80% of parallel reasoning traces yield identical final answers, representing substantial wasted computation. To address this critical efficiency bottleneck, we propose DeepPrune, a novel framework that enables efficient parallel scaling through dynamic pruning. Our method features a specialized judge model trained with focal loss and oversampling techniques to accurately predict answer equivalence from partial reasoning traces, achieving 0.87 AUROC on equivalence prediction, combined with an online greedy clustering algorithm that dynamically prunes redundant paths while preserving answer diversity. Comprehensive evaluations across three challenging benchmarks (AIME 2024, AIME 2025, and GPQA) and multiple reasoning models demonstrate that DeepPrune reduces token usage by over 80% compared to conventional consensus sampling in most cases, while keeping accuracy within 3 percentage points. Our work establishes a new standard for efficient parallel reasoning, making high-performance reasoning more efficient. Our code and data are available at https://github.com/THU-KEG/DeepPrune.

  # DeepPrune-Judge-4B

@@ -35,9 +31,6 @@ This model is a fine-tuned version of [Qwen3-4B-Instruct-2507](https://huggingfa
  It achieves the following results on the evaluation set:
  - Loss: 0.0438

-
-
-
  ## Model description

  To address the inter-trace redundancy problem in parallel scaling, we propose **DeepPrune**, a two-stage framework that includes offline training of a specialized judge model and online inference-time pruning.
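
For reference, the updated metadata keeps `pipeline_tag: text-classification`, so a minimal usage sketch consistent with that tag could look like the code below. The way the two partial reasoning traces are paired into a single input string is an illustrative assumption, not the official template; the exact prompt format used to train and query the judge is defined in the DeepPrune repository.

```python
# Minimal sketch, not the official usage: loads the judge through the generic
# transformers text-classification pipeline declared in the card metadata.
# The concatenation format for the two partial Chain-of-Thought traces below
# is hypothetical; see https://github.com/THU-KEG/DeepPrune for the real template.
from transformers import pipeline

judge = pipeline("text-classification", model="THU-KEG/DeepPrune-Judge-4B")

trace_a = "Let the unknown be x, so 3x + 6 = 132 ... the answer is 42."
trace_b = "Dividing 126 by 3 gives 42, hence the final answer is 42."

# The judge is trained to predict whether two partial traces will reach the
# same final answer; a positive prediction lets DeepPrune prune one of them.
print(judge(f"Trace 1: {trace_a}\nTrace 2: {trace_b}"))
```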