# Problems encountered during reproduction
1. The peft version was too high; pin an older release:
```
pip install peft==0.6.0
```
2. zero3.json must contain a `"train_batch_size"` field; DeepSpeed requires it to equal `train_micro_batch_size_per_gpu` × `gradient_accumulation_steps` × world size (see the check below)
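A quick sanity check of that identity (a minimal sketch; `NUM_GPUS` is an assumption, set it to your actual world size):
```
import json

NUM_GPUS = 8  # assumption: number of GPUs (world size) used for the run

with open("zero3.json") as f:
    cfg = json.load(f)

# DeepSpeed requires:
# train_batch_size == train_micro_batch_size_per_gpu * gradient_accumulation_steps * world_size
expected = (
    cfg["train_micro_batch_size_per_gpu"]
    * cfg["gradient_accumulation_steps"]
    * NUM_GPUS
)
assert cfg["train_batch_size"] == expected, (cfg["train_batch_size"], expected)
```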
3. The CUDA version did not match deepspeed
```
Find torch and deepspeed builds that match the installed CUDA toolkit
```
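One way to check what is actually installed (a minimal sketch; running `ds_report` on the command line gives a fuller compatibility report):
```
import torch
import deepspeed

print("torch:", torch.__version__)
print("torch built against CUDA:", torch.version.cuda)
print("deepspeed:", deepspeed.__version__)
```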
4. The zero3.json suggested by DeepSeek used the CPU-offloaded optimizer; set both offload devices to `"none"`:
```
"offload_optimizer": {
"device": "none",
"pin_memory": true
},
"offload_param": {
"device": "none",
"pin_memory": true
},
```
5. Error: `no sync context manager is incompatible with gradient partitioning logic of ZeRO stage 3`
```
# Sometimes Baidu is more useful than AI assistants
pip install deepspeed==0.15.4
```
6. The full working zero3.json (note train_batch_size 128 = 1 micro batch × 16 accumulation steps × 8 GPUs):
```
{
    "bf16": {
        "enabled": true
    },
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "none",
            "pin_memory": true
        },
        "offload_param": {
            "device": "none",
            "pin_memory": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9
    },
    "gradient_accumulation_steps": 16,
    "train_micro_batch_size_per_gpu": 1,
    "train_batch_size": 128,
    "gradient_clipping": "auto",
    "steps_per_print": 10,
    "wall_clock_breakdown": false
}
```
7. How to download all of the ocr_vqa images:
```
https://github.com/haotian-liu/LLaVA/issues/1618
```
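For reference, a minimal sketch of the usual approach (assumptions: OCR-VQA's official `dataset.json`, where each entry carries an `imageURL` field; see the linked issue for more robust options):
```
import json
import os
import urllib.request

with open("dataset.json") as f:  # assumption: OCR-VQA annotation file
    data = json.load(f)

os.makedirs("ocr_vqa/images", exist_ok=True)
for image_id, entry in data.items():
    url = entry["imageURL"]
    ext = os.path.splitext(url)[1] or ".jpg"
    path = os.path.join("ocr_vqa/images", image_id + ext)
    if os.path.exists(path):
        continue
    try:
        urllib.request.urlretrieve(url, path)
    except Exception as e:
        # some source links may be dead; see the issue above for alternatives
        print(f"failed {image_id}: {e}")
```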
8. Saving the model raises an error. Since evaluation uses greedy search, delete the following two lines from generation_config.json in lmsys/vicuna-7b-v1.5:
```
"temperature": 0.9,
"top_p": 0.6,
```
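The same fix can be done programmatically (a minimal sketch; newer `transformers` versions validate the generation config on save, and sampling parameters like `temperature`/`top_p` conflict with greedy search, i.e. `do_sample=False`):
```
from transformers import GenerationConfig

path = "lmsys/vicuna-7b-v1.5"  # assumption: local copy of the model directory

cfg = GenerationConfig.from_pretrained(path)
# reset the sampling parameters to their defaults so that greedy decoding
# no longer trips the config validation
cfg.temperature = 1.0
cfg.top_p = 1.0
cfg.save_pretrained(path)
```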
# Pitfalls in reproducing the evaluation
1. The checkpoint's file name must contain `llava` (the model loader dispatches on the name)
2. LlamaModel's forward function does not handle the case where the input is a single token (during inference, from the second forward pass onward, only one token is passed in). To support the single-token case, make the following changes:
```
# Oddly enough, the original author did account for voco_loc_back needing +1
https://github.com/Yxxxb/VoCo-LLaMA/blob/385e7974a866cf73f1cabc8c29cb7a2180fd4dfd/llava/model/language_model/llava_llama_1stg.py#L271
Change it to:
# The overall idea: build the mask for the entire sequence on every forward
# pass, whether or not there is a KV cache
attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(
    attention_mask,
    (batch_size, seq_length + past_key_values_length),  # was (batch_size, seq_length); now both cases take the same code path
    inputs_embeds,  # only .dtype and isinstance are used on this, so passing it is harmless
    0,  # was past_key_values_length
)
# ------------------------------------------
# https://github.com/Yxxxb/VoCo-LLaMA/blob/385e7974a866cf73f1cabc8c29cb7a2180fd4dfd/llava/model/language_model/llava_llama_1stg.py#L305
Add above that line:
# after the attention mask has been processed
attention_mask = attention_mask[:, :, -seq_length:, :]
```
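To see why building the full mask and then slicing works, here is a standalone shape check (a minimal sketch; `_prepare_4d_causal_attention_mask_for_sdpa` lives in `transformers.modeling_attn_mask_utils` around v4.36, and the toy sizes are assumptions):
```
import torch
from transformers.modeling_attn_mask_utils import (
    _prepare_4d_causal_attention_mask_for_sdpa,
)

# second forward pass during inference: 1 new token, 5 cached tokens
batch_size, seq_length, past_key_values_length, hidden = 1, 1, 5, 8
inputs_embeds = torch.zeros(batch_size, seq_length, hidden)

# 2D padding mask over past + current tokens; mask one position so the
# helper cannot shortcut to returning None for a trivially causal mask
attention_mask = torch.ones(batch_size, seq_length + past_key_values_length, dtype=torch.long)
attention_mask[0, 0] = 0

# build the 4D mask for the whole sequence, as in the fix above
full_mask = _prepare_4d_causal_attention_mask_for_sdpa(
    attention_mask,
    (batch_size, seq_length + past_key_values_length),
    inputs_embeds,
    0,
)
print(full_mask.shape)  # torch.Size([1, 1, 6, 6]): query rows for all 6 positions

# keep only the query rows for the tokens fed in this forward pass
step_mask = full_mask[:, :, -seq_length:, :]
print(step_mask.shape)  # torch.Size([1, 1, 1, 6]): the new token attends over all 6
```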