Uni-MoE 2.0-Thinking

Uni-MoE 2.0 is a fully open-source omnimodal model that substantially advances the capabilities of Lychee's Uni-MoE series in language-centric multimodal understanding, reasoning, and generation.

Uni-MoE 2.0-Thinking is a thinking model obtained by training Uni-MoE 2.0-Base through a three-stage reinforcement learning process, equipping it with long-form reasoning capabilities.


If you find our work helpful or want timely updates, please give this model a like and follow us.

Installation

1. Clone this repository and navigate to the Uni-MoE-2 folder

git clone https://github.com/HITsz-TMG/Uni-MoE.git
cd Uni-MoE/Uni-MoE-2

2. Set up the environment

Create a conda environment and install the required packages:

conda create -n uni_moe_2 python=3.11
conda activate uni_moe_2
pip install torch==2.5.1 torchaudio==2.5.1 torchvision==0.20.1
pip install -r requirements.txt
pip install flash-attn==2.6.0.post1 --no-build-isolation
pip install "clip@git+https://github.com/openai/CLIP.git@dcba3cb2e2827b402d2701e7e1c7d9fed8a20ef1"
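
After installing, you can sanity-check the environment with a short Python snippet. This is a minimal sketch; it only verifies that the pinned versions above import correctly on a GPU machine:

import torch
import flash_attn

# Confirm the CUDA build of PyTorch and the FlashAttention extension are usable
print(torch.__version__)          # expect 2.5.1
print(torch.cuda.is_available())  # should print True on a GPU machine
print(flash_attn.__version__)     # expect 2.6.0.post1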

Example Usage

We provide a simple example of how to use this repo. For detailed usage, please refer to the cookbook.

import torch
from uni_moe.model.processing_qwen2_vl import Qwen2VLProcessor
from uni_moe.model.modeling_qwen_grin_moe import GrinQwen2VLForConditionalGeneration
from uni_moe.qwen_vl_utils import process_mm_info
from uni_moe.model import deepspeed_moe_inference_utils

# Load the processor and the model weights in bfloat16 on the GPU
processor = Qwen2VLProcessor.from_pretrained("HIT-TMG/Uni-MoE-2.0-Thinking")
model = GrinQwen2VLForConditionalGeneration.from_pretrained(
    "HIT-TMG/Uni-MoE-2.0-Thinking", torch_dtype=torch.bfloat16
).cuda()

# Share the model configuration with the processor
processor.data_args = model.config

messages = [
    {
       "role": "system",
       "content": "You are Uni-MoE-v2, a helpful multi-modal model. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: <think> Thought section </think> Solution section. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines." 
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "examples/assets/image/thinking.jpg"},
            {"type": "text", "text": "<image>\nHint: Please answer the question requiring an integer answer and provide the final value, e.g., 1, 2, 3, at the end.\nQuestion: Several people compared how many Web pages they had visited. What is the mean of the numbers?'"},
        ],
    }
]
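
# Audio and video inputs follow the same message pattern; the exact keys below
# are illustrative assumptions (hypothetical paths), paired with matching
# <audio> / <video> placeholders in the text prompt:
#   {"type": "audio", "audio": "examples/assets/audio/sample.wav"}
#   {"type": "video", "video": "examples/assets/video/sample.mp4"}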

# Render the chat template, then replace the modality placeholders with the
# special token sequences the model expects
texts = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
texts = (
    texts.replace("<image>", "<|vision_start|><|image_pad|><|vision_end|>")
    .replace("<audio>", "<|audio_start|><|audio_pad|><|audio_end|>")
    .replace("<video>", "<|vision_start|><|video_pad|><|vision_end|>")
)

# Collect the image, video, and audio data referenced in the messages
image_inputs, video_inputs, audio_inputs = process_mm_info(messages)

# Pack the text and multimodal features into model inputs

inputs = processor(
    text=texts,
    images=image_inputs,
    videos=video_inputs,
    audios=audio_inputs,
    padding=True,
    return_tensors="pt",
)
inputs["input_ids"] = inputs["input_ids"].unsqueeze(0)

inputs = inputs.to(device=model.device)

# Generate the response; the model first emits its reasoning inside
# <think> ... </think>, then the final solution
output_ids = model.generate(
    **inputs,
    use_cache=True,
    pad_token_id=processor.tokenizer.eos_token_id,
    max_new_tokens=8192,
    temperature=1.0,
    do_sample=True
)

# Decode only the newly generated tokens
text = processor.batch_decode(output_ids[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)[0]
print(text)
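
Because the system prompt asks the model to put its reasoning inside <think> ... </think> before the final solution, the answer can be separated from the reasoning trace after decoding. Below is a minimal sketch of such a split, assuming the response follows the requested format:

def split_thinking(response):
    # Split a response into (thought, solution) on the </think> marker;
    # fall back to the raw text if the marker is missing
    if "</think>" in response:
        thought, _, solution = response.partition("</think>")
        return thought.replace("<think>", "").strip(), solution.strip()
    return "", response.strip()

thought, solution = split_thinking(text)
print(solution)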