HiVG: Hierarchical SVG Tokenization

HiVG-3B-Base is a 3B-parameter vision-language model for autoregressive Scalable Vector Graphics (SVG) generation.


HiVG introduces a novel hierarchical SVG tokenization framework that replaces generic byte-level tokenization with geometry-aware atomic and segment tokens, enabling significantly more efficient and faithful SVG code generation.

Highlights

  • Small Model, Frontier Results — A 3B-parameter model that outperforms all 7 evaluated proprietary models, including GPT-5 and Gemini 2.5, on image-to-SVG.
  • Efficient SVG Token Compression — Hierarchical tokenization (Raw SVG → Atomic tokens → Segment tokens) with 2.76x sequence compression.
  • High-Fidelity Image-to-SVG — Convert any image into a clean, editable SVG — structure, layout, and detail faithfully preserved.
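The Raw SVG → Atomic → Segment hierarchy can be illustrated with a toy tokenizer. This is a minimal sketch with a hypothetical token scheme (splitting path data into command/coordinate atoms, then grouping each command with its coordinates into one segment); the actual HiVG tokenizer is geometry-aware and is described in the paper.

```python
# Toy illustration of two-level SVG tokenization (hypothetical scheme,
# not the official HiVG tokenizer).
import re

def atomic_tokens(path_d):
    """Split a path `d` attribute into atomic tokens: commands and numbers."""
    return re.findall(r"[A-Za-z]|-?\d+(?:\.\d+)?", path_d)

def segment_tokens(atoms):
    """Merge each command with its trailing coordinates into one segment token."""
    segments, current = [], []
    for tok in atoms:
        if tok.isalpha():
            if current:
                segments.append(" ".join(current))
            current = [tok]
        else:
            current.append(tok)
    if current:
        segments.append(" ".join(current))
    return segments

d = "M 10 10 L 90 10 L 90 90 Z"
atoms = atomic_tokens(d)          # 10 atomic tokens
segments = segment_tokens(atoms)  # 4 segment tokens
print(len(atoms) / len(segments))  # 2.5x compression on this toy path
```

The same idea at scale (merging frequent coordinate/command patterns into single vocabulary entries) is what yields the reported 2.76x sequence compression.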

Quick Start

You can use the provided inference pipeline for both image-to-SVG and text-to-SVG tasks.

from hivg_infer import HiSVGInferencePipeline

pipeline = HiSVGInferencePipeline(
    model_path="xingxm/HiVG-3B-Base",
    coord_range=234,
    temperature=0.7,
    top_p=0.9,
    max_new_tokens=4096,
)

# Image-to-SVG
result = pipeline.img2svg("path/to/your_image.png")
if result["success"]:
    print(result["svg"])

# Text-to-SVG
result = pipeline.text2svg("A minimalist black phone icon with an outline style")
if result["success"]:
    with open("output.svg", "w") as f:
        f.write(result["svg"])
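Since result["svg"] is raw SVG markup, it can be useful to sanity-check that the generated string is well-formed XML before saving or rendering it. A minimal check using only the standard library (is_valid_svg is a helper written for this card, not part of the HiVG API):

```python
import xml.etree.ElementTree as ET

def is_valid_svg(svg_text):
    """Return True if the text parses as XML with an <svg> root element."""
    try:
        root = ET.fromstring(svg_text)
    except ET.ParseError:
        return False
    # root.tag may be namespace-qualified, e.g. "{http://www.w3.org/2000/svg}svg"
    return root.tag.endswith("svg")

print(is_valid_svg('<svg xmlns="http://www.w3.org/2000/svg"><circle r="5"/></svg>'))  # True
print(is_valid_svg("<svg><unclosed>"))  # False
```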

Note: For detailed inference code, data preprocessing, and the hierarchical SVG tokenizer/detokenizer, please visit the project page and the associated code repository.

Intended Uses

Primary Use Cases

  • Text-to-SVG Generation: Generate SVG vector graphics from natural language descriptions.
  • Image-to-SVG Generation (Vectorization): Convert raster images into editable SVG code.

Out-of-Scope Uses

  • This is a base model and has not been instruction-tuned or RLHF-aligned for production deployment.
  • Not designed for generating arbitrary code beyond SVG.
  • Not suitable for safety-critical applications without additional safeguards.

Training Details

Training Procedure

  • Backbone: Qwen2.5-VL-3B
  • Fine-tuning: Full-parameter SFT with frozen vision encoder
  • Curriculum Learning: Training follows a curriculum that progressively increases SVG program complexity
  • Initialization: Hierarchical mean-noise initialization strategy for new SVG token embeddings
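A common recipe for initializing embeddings of newly added vocabulary tokens is to place them at the mean of the existing embedding matrix plus small Gaussian noise. The sketch below shows that generic recipe (the noise_std value and function name are illustrative); the hierarchical grouping that HiVG applies on top of it for its SVG tokens is described in the paper.

```python
import numpy as np

def mean_noise_init(embeddings, num_new, noise_std=0.02, seed=0):
    """Initialize new token embeddings at the mean of existing ones plus noise.

    Generic mean+noise recipe for vocabulary extension; HiVG's hierarchical
    variant for SVG tokens is detailed in the paper.
    """
    rng = np.random.default_rng(seed)
    mean = embeddings.mean(axis=0)                       # (dim,)
    noise = rng.normal(0.0, noise_std,
                       size=(num_new, embeddings.shape[1]))
    return mean + noise                                  # (num_new, dim)

vocab = np.random.default_rng(1).normal(size=(1000, 64))  # stand-in embedding table
new_rows = mean_noise_init(vocab, num_new=8)
print(new_rows.shape)  # (8, 64)
```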

Compute Infrastructure

Please refer to the paper for detailed compute specifications.

Citation

If you find this work helpful, please cite:

@article{xing2026hivg,
  title={Hierarchical SVG Tokenization: Learning Compact Visual Programs for Scalable Vector Graphics Modeling},
  author={Ximing Xing and Ziteng Xue and Zhenxi Li and Weicong Liang and Linqing Wang and Zhantao Yang and Tiankai Hang and Zijin Yin and Qinglin Lu and Chunyu Wang and Qian Yu},
  journal={arXiv preprint arXiv:2604.05072},
  year={2026}
}
