---
language:
- en
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B
tags:
- code-generation
- manim
- python
- animation
- mathematics
- unsloth
- qlora
- text-generation-inference
- transformers
- peft
- lora
datasets:
- dalle2/3blue1brown-manim
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: Qwen2.5-Coder-7B-manim
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: 3blue1brown-manim
      type: dalle2/3blue1brown-manim
    metrics:
    - type: loss
      value: 0.553
      name: Final Training Loss
widget:
- text: "Generate Manim code for the following task: Create a blue circle"
  example_title: "Simple Shape"
- text: "Generate Manim code for the following task: Draw a sine wave animation"
  example_title: "Mathematical Function"
- text: "Generate Manim code for the following task: Show the Pythagorean theorem"
  example_title: "Mathematical Formula"
inference:
  parameters:
    temperature: 0.3
    top_p: 0.9
    max_new_tokens: 512
---

# Qwen2.5-Coder-7B-Manim

[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-md.svg)](https://huggingface.co/Harish102005/Qwen2.5-Coder-7B-manim)
[![Base Model](https://img.shields.io/badge/Base_Model-Qwen2.5--Coder--7B-green)](https://huggingface.co/Qwen/Qwen2.5-Coder-7B)

**Generate Manim (Mathematical Animation Engine) Python code from natural language descriptions!**

Fine-tuned on **2,407 examples** from the 3Blue1Brown Manim dataset using **QLoRA** with Unsloth.

---

## 🚀 Quick Start

### Installation

```bash
pip install unsloth transformers accelerate
```

### Load Model

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Harish102005/Qwen2.5-Coder-7B-manim",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)
```

### Generate Manim Code

```python
# Alpaca-style prompt template
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

prompt = "Create a blue circle that grows to twice its size"

inputs = tokenizer([
    alpaca_prompt.format(
        "Generate Manim code for the following task:",
        prompt,
        ""
    )
], return_tensors="pt").to("cuda")

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.3,
    top_p=0.9,
    repetition_penalty=1.1,
    do_sample=True,
)

# Decode the first (and only) sequence in the batch
generated_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_code.split("### Response:")[-1].strip())
```

---
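### Render the Generated Code

The decoded text is ordinary Manim source, so a natural next step is to write it to a file and render it with the Manim Community CLI. The snippet below is a minimal sketch rather than part of the original workflow: it assumes Manim is installed locally (`pip install manim`) and that the generated scene class is named `MyScene`, as in the example outputs further down this card.

```python
import subprocess
from pathlib import Path

# `generated_code` comes from the snippet above; keep only the response portion
scene_source = generated_code.split("### Response:")[-1].strip()
Path("generated_scene.py").write_text(scene_source)

# Render a quick preview: -p opens the video when done, -ql uses low render quality
subprocess.run(["manim", "-pql", "generated_scene.py", "MyScene"], check=True)
```

If the model emits a different class name, pass that name instead of `MyScene`, or omit it when the file contains a single scene.

---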
### Helper Function

```python
def generate_manim_code(prompt, max_tokens=512):
    alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

    formatted_prompt = alpaca_prompt.format(
        "Generate Manim code for the following task:",
        prompt,
        ""
    )

    inputs = tokenizer([formatted_prompt], return_tensors="pt").to("cuda")

    outputs = model.generate(
        **inputs,
        max_new_tokens=max_tokens,
        temperature=0.3,
        top_p=0.9,
        repetition_penalty=1.1,
        do_sample=True,
    )

    # Decode the first sequence and keep only the model's response
    generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)

    if "### Response:" in generated_text:
        code = generated_text.split("### Response:")[-1].strip()
        if "### Instruction:" in code:
            code = code.split("### Instruction:")[0].strip()
        return code
    return generated_text


# Example
code = generate_manim_code("Create a rotating square")
print(code)
```

---

## 📊 Example Outputs

### 1. Color-Changing Rotation

**Prompt:** `"Create a rotating square that changes color from blue to red"`

```python
from manim import *

class MyScene(Scene):
    def construct(self):
        square = Square(color=BLUE)
        self.add(square)
        self.play(square.animate.rotate(PI), run_time=2)
        square.set_color(RED)
```

### 2. Mathematical Function

**Prompt:** `"Draw a sine wave from 0 to 2π with animation"`

```python
from manim import *

class MyScene(Scene):
    def construct(self):
        axes = Axes(x_range=[0, 2*PI], y_range=[-1, 1])
        graph = axes.plot(lambda x: np.sin(x), color=BLUE)
        self.add(axes, graph)
```

### 3. Formula Display

**Prompt:** `"Show the equation E=mc² and fade it in"`

```python
from manim import *

class MyScene(Scene):
    def construct(self):
        e_mc_squared = MathTex("E=mc^2")
        self.play(Write(e_mc_squared))
        self.wait()
```

---

## 📈 Model Details

* **Base Model:** [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B)
* **Fine-tuning Method:** QLoRA (4-bit) with [Unsloth](https://github.com/unslothai/unsloth)
* **Dataset:** [dalle2/3blue1brown-manim](https://huggingface.co/datasets/dalle2/3blue1brown-manim)
* **Dataset Size:** 2,407 prompt-code pairs
* **Final Training Loss:** 0.553
* **Model Type:** Qwen2ForCausalLM
* **Parameters:** ~7.6B (base), Trainable: 40.4M (0.53%)

### Hyperparameters

| Parameter           | Value                                                         |
| ------------------- | ------------------------------------------------------------- |
| LoRA Rank (r)       | 16                                                            |
| LoRA Alpha          | 16                                                            |
| LoRA Dropout        | 0.0                                                           |
| Target Modules      | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Max Sequence Length | 2048                                                          |
| Precision           | BFloat16                                                      |
| Quantization        | 4-bit NF4 (double quantization)                               |

---

## 🎯 Use Cases

* Generate educational animations (math tutorials, visualizations)
* Rapid prototyping of visual content in Manim
* Learning Manim syntax and animation techniques
* Content automation (batch animation generation)

---

## ⚠️ Limitations

* Primarily for **2D Manim animations**; may struggle with complex 3D scenes
* Training data limited to **3Blue1Brown patterns** (2,407 examples)
* Minor manual corrections may be needed for complex animations
* Advanced Manim features (custom shaders, complex mobjects) not fully supported

---

## 🔧 Advanced Usage

### Streaming Output

```python
from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=512, temperature=0.3)
```

### Batch Generation

```python
prompts = ["Create a blue circle", "Draw a red square", "Show a green triangle"]

for prompt in prompts:
    code = generate_manim_code(prompt)
    print(f"Prompt: {prompt}\n{code}\n{'-'*60}")
```

---
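### Padded Batch Generation (sketch)

The loop above calls `generate` once per prompt. For larger prompt sets, the prompts can also be tokenized together and decoded in a single call. The sketch below is an illustration rather than a tested recipe: it assumes left padding (the usual choice for decoder-only models), reuses the `alpaca_prompt` template from the Quick Start, and keeps the same sampling settings.

```python
# Assumes `model`, `tokenizer`, and `alpaca_prompt` are defined as in the Quick Start.
prompts = ["Create a blue circle", "Draw a red square", "Show a green triangle"]

tokenizer.padding_side = "left"              # decoder-only models are padded on the left
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

batch = [
    alpaca_prompt.format("Generate Manim code for the following task:", p, "")
    for p in prompts
]
inputs = tokenizer(batch, return_tensors="pt", padding=True).to("cuda")

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.3,
    top_p=0.9,
    repetition_penalty=1.1,
    do_sample=True,
)

for prompt, output in zip(prompts, outputs):
    text = tokenizer.decode(output, skip_special_tokens=True)
    code = text.split("### Response:")[-1].strip()
    print(f"Prompt: {prompt}\n{code}\n{'-'*60}")
```

---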
## 🙏 Acknowledgments

* **Base Model:** [Qwen Team](https://github.com/QwenLM/Qwen)
* **Dataset:** [dalle2](https://huggingface.co/datasets/dalle2)
* **Training Framework:** [Unsloth](https://github.com/unslothai/unsloth)
* **Inspiration:** [3Blue1Brown](https://www.3blue1brown.com/) and the Manim Community

---

✅ **Star this model** if you find it useful!