Update README.md

---
tags:
- audio
- audio-language
- multimodal
- reasoning
- auditory-semantics
- reinforcement-learning
- grpo
- qwen
- audsem
- question-answering
- qa
language: en
license: apache-2.0
datasets:
- GLJS/AudSem
---

# AudSemThinker-QA-GRPO

## Model Description
`AudSemThinker-QA-GRPO` is a variant of `AudSemThinker` fine-tuned with Group Relative Policy Optimization (GRPO), a Reinforcement Learning with Verifiable Rewards (RLVR) method. This approach enhances reasoning capability and allows the thinking budget to be controlled during generation. The model keeps the structured reasoning framework of `AudSemThinker` (thinking, semantic elements, and answer phases) but is optimized specifically for multiple-choice audio question answering: it is designed to produce accurate answers while keeping the reasoning in its `<think>` section to a controlled length.

## How to Use
To use `AudSemThinker-QA-GRPO` for audio question answering, load it with the `transformers` library. Ensure you have `torch`, `torchaudio`, and `soundfile` installed.

```python
import torch
import torchaudio
from transformers import Qwen2_5OmniProcessor, Qwen2_5OmniThinkerForConditionalGeneration

# Load processor and model
processor = Qwen2_5OmniProcessor.from_pretrained("GLJS/audsemthinker-qa-grpo", trust_remote_code=True)
model = Qwen2_5OmniThinkerForConditionalGeneration.from_pretrained(
    "GLJS/audsemthinker-qa-grpo",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
    low_cpu_mem_usage=True,
)

# Example audio file (replace with your audio path)
audio_file = "path/to/your/audio.wav"

# Load the audio and resample it to the feature extractor's rate if needed
audio_input, sampling_rate = torchaudio.load(audio_file)
if sampling_rate != processor.feature_extractor.sampling_rate:
    audio_input = torchaudio.transforms.Resample(
        orig_freq=sampling_rate,
        new_freq=processor.feature_extractor.sampling_rate,
    )(audio_input)
audio_input = audio_input.mean(dim=0).numpy()  # downmix to mono, convert to numpy array

# Example multiple-choice question
question = "What type of sound is present in the audio? Options: (A) Speech (B) Music (C) Environmental Sound (D) Silence"
user_prompt_text = f"You are given a question and an audio clip. Your task is to answer the question based on the audio clip. First, think about the question and the audio clip and put your thoughts in <think> and </think> tags. Then reason about the semantic elements involved in the audio clip and put your reasoning in <semantic_elements> and </semantic_elements> tags. Then answer the question based on the audio clip, put your answer in <answer> and </answer> tags.\nQuestion: {question}"

# Construct messages in conversation format, similar to training
messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}]},
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio": audio_input},
            {"type": "text", "text": user_prompt_text}
        ]
    }
]

# Apply chat template
text_from_chat_template = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Prepare inputs for the model
inputs = processor(
    text=text_from_chat_template,
    audio=[audio_input],  # pass audio as a list of numpy arrays
    return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly generated tokens
output_ids = model.generate(**inputs, max_new_tokens=512)
response = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:],
    skip_special_tokens=True
)[0]

print(response)
# Expected output format for QA:
# <think>...detailed reasoning about the audio scene and question...</think>
# <semantic_elements>...list of identified semantic descriptors...</semantic_elements>
# <answer>...selected option (e.g., (B) Music)...</answer>
```
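
Because the answer is wrapped in XML-style tags, a tiny helper can pull it out of the raw response. This is a minimal illustrative sketch; `extract_answer` is not part of the model's API:

```python
import re

def extract_answer(response: str) -> str:
    """Return the text inside <answer>...</answer>, or the raw response as a fallback."""
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()

print(extract_answer(response))  # e.g. "(B) Music"
```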

## Training Data
`AudSemThinker-QA-GRPO` is fine-tuned on the **multiple-choice question answering (QA) subset** of the `AudSem` dataset (approximately 140k examples). This subset provides easily verifiable correct answers, making it suitable for Reinforcement Learning with Verifiable Rewards (RLVR).
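
To inspect this subset, the dataset can be loaded with the `datasets` library. This is a minimal sketch; the configuration and split names below are assumptions, so check the `GLJS/AudSem` dataset card for the exact ones:

```python
from datasets import load_dataset

# The "qa" configuration name is an assumption; see the GLJS/AudSem dataset card
ds = load_dataset("GLJS/AudSem", "qa", split="train")
print(ds[0])  # one QA record: audio, question, options, and the correct answer
```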

## Training Procedure
* **Base Model:** Qwen2.5-Omni-7B.
* **Fine-tuning Paradigm:** Reinforcement Learning with Group Relative Policy Optimization (GRPO).
* **Reward Functions** (sketched in code after this list):
  * **Accuracy Reward:** Evaluates the correctness of the content within the `<answer>` tags, using string matching for multiple-choice questions.
  * **Format Adherence Reward:** Encourages strict adherence to the prescribed XML-tag structure (`<think>`, `<semantic_elements>`, `<answer>`), checking for presence, correct order, and proper encapsulation.
  * **Length Constraint Reward:** Targets the `<think>` phase, penalizing deviations from a target thinking length (25 words for this model) to enforce a controlled reasoning budget.
* **Parameter-Efficient Fine-tuning:** LoRA (Low-Rank Adaptation).
* **GRPO Loss Type:** Default, with `beta = 0.01`.
* **Generations per Prompt (k):** 6.
* **Precision:** bf16.
* **Batch Size:** 2 per device.
* **Hardware:** Four H100 GPUs, using DeepSpeed ZeRO-3 and vLLM for efficient training and inference.
* **Training Time:** Approximately 10 hours.
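
The exact reward implementations are not published in this card. Below is a minimal sketch of what the three rewards could look like, written against the reward-function interface of TRL's `GRPOTrainer` (completions plus dataset columns as keyword arguments); the function names, the linear length shaping, and the `solution` column are illustrative assumptions:

```python
import re

TARGET_THINK_WORDS = 25  # target thinking budget from this card

def accuracy_reward(completions, solution, **kwargs):
    # String-match the <answer> content against the gold option (e.g. "(B) Music")
    rewards = []
    for completion, gold in zip(completions, solution):
        match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
        pred = match.group(1).strip() if match else ""
        rewards.append(1.0 if gold.strip().lower() in pred.lower() else 0.0)
    return rewards

def format_reward(completions, **kwargs):
    # All three tag pairs must be present, in order, wrapping the whole output
    pattern = r"^<think>.*?</think>\s*<semantic_elements>.*?</semantic_elements>\s*<answer>.*?</answer>\s*$"
    return [1.0 if re.match(pattern, c, re.DOTALL) else 0.0 for c in completions]

def length_reward(completions, **kwargs):
    # Penalize deviation from the target <think> length; this linear shaping
    # stands in for the alpha/delta-parameterized reward mentioned below
    rewards = []
    for c in completions:
        match = re.search(r"<think>(.*?)</think>", c, re.DOTALL)
        n_words = len(match.group(1).split()) if match else 0
        rewards.append(max(0.0, 1.0 - abs(n_words - TARGET_THINK_WORDS) / TARGET_THINK_WORDS))
    return rewards
```

The hyperparameters listed above map naturally onto TRL's `GRPOConfig`. The sketch below shows that mapping only; whether TRL's trainer handles Qwen2.5-Omni's audio inputs out of the box is not guaranteed, and the LoRA settings are assumptions:

```python
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

args = GRPOConfig(
    output_dir="audsemthinker-qa-grpo",
    num_generations=6,              # k = 6 generations per prompt
    beta=0.01,                      # KL coefficient of the default GRPO loss
    per_device_train_batch_size=2,  # batch size of 2 per device
    bf16=True,
    use_vllm=True,                  # vLLM-backed generation during training
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Omni-7B",
    reward_funcs=[accuracy_reward, format_reward, length_reward],
    args=args,
    train_dataset=ds,               # QA subset loaded earlier
    peft_config=LoraConfig(r=16, lora_alpha=32),  # assumed LoRA settings
)
trainer.train()
```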

## Evaluation Results
`AudSemThinker-QA-GRPO` demonstrates strong performance on multiple-choice QA tasks, showcasing the effectiveness of GRPO in guiding the model toward the desired reasoning patterns and controlled thinking lengths.

## Limitations and Bias
* **Generalization to Open-Ended Tasks:** While GRPO is effective for multiple-choice QA, its performance on open-ended tasks such as general audio captioning or free-form QA may not always surpass SFT, because the quality of longer, more subjective text is harder for automated reward models to verify.
* **Thinking Budget Sensitivity:** The effectiveness of the length constraint reward depends on its parameters (`alpha`, `delta`) and on the model's initial average output length. Target reasoning lengths that fall outside the effective reward range may not translate into better performance under the current setup.
* **Data Contamination:** While `AudSem` is designed to minimize overlap, the underlying `Qwen2.5-Omni` model might have encountered data present in test sets during its pretraining.

## Ethical Considerations
* **Data Sourcing:** The `AudSem` dataset is sourced primarily from YouTube closed captions. Systematic checks for harmful content (e.g., child abuse, hate speech, sexual content, harassment) were performed, and YouTube's community guidelines provide an additional safeguard, but biases or problematic content from the original video sources could still be present.
* **Societal Impact:** By enhancing audio-language understanding, `AudSemThinker-QA-GRPO` can support scenarios that require precise, controlled question answering over audio, potentially leading to more reliable automated systems.

## Citation
```bibtex
@article{wijngaard2025audsemthinker,
  title={AudSemThinker: Enhancing Audio-Language Models through Reasoning over Semantics of Sound},
  author={Wijngaard, Gijs and Formisano, Elia and Esposito, Michele and Dumontier, Michel},
  journal={NeurIPS},
  year={2025},
  url={https://github.com/GLJS/AudSemThinker}
}
```