---
license: mit
pipeline_tag: text-generation
tags: [ONNX, ONNXRuntime, phi4, nlp, conversational, custom_code]
inference: false
---

Based on https://huggingface.co/microsoft/Phi-4-mini-instruct

The ONNX model was converted with https://github.com/microsoft/onnxruntime-genai using the following command:

```shell
python -m onnxruntime_genai.models.builder \
  -m microsoft/Phi-4-mini-instruct \
  -o Phi-4-mini-instruct-onnx \
  -e webgpu -c cache-dir -p int4 \
  --extra_options int4_block_size=32 int4_accuracy_level=4
```

The generated external data file (model_q4f16.onnx_data) is larger than 2 GB, which ORT-Web cannot load, so I use an additional Python script to move some of the tensor data into model.onnx.