Can this model be converted to ONNX?

#3
by bagood - opened

I tried to convert it using Ultralytics and then test predict, but got an error:

from ultralytics import YOLO

# Load a model
model = YOLO("Anzhc_Eyes_seg_hd.pt")  # load the custom segmentation weights

# Export the model
model.export(format="onnx")
Ultralytics 8.3.150 🚀 Python-3.11.12 torch-2.6.0+cu124 CPU (Intel Xeon 2.20GHz)
YOLOv8n-seg summary (fused): 85 layers, 3,258,259 parameters, 0 gradients, 12.0 GFLOPs

PyTorch: starting from 'Anzhc_Eyes_seg_hd.pt' with input shape (1, 3, 1024, 1024) BCHW and output shape(s) ((1, 37, 21504), (1, 32, 256, 256)) (6.6 MB)

ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.56...
ONNX: export success ✅ 2.0s, saved as 'Anzhc_Eyes_seg_hd.onnx' (12.9 MB)

Export complete (3.8s)
Results saved to /content
Predict:         yolo predict task=segment model=Anzhc_Eyes_seg_hd.onnx imgsz=1024  
Validate:        yolo val task=segment model=Anzhc_Eyes_seg_hd.onnx imgsz=1024 data=G:\YOLO\EyesInpaint\Data.yaml  
Visualize:       https://netron.app
Anzhc_Eyes_seg_hd.onnx

And when testing it:

import os

import cv2
import numpy as np
import requests
from ultralytics import YOLO

onnx_model_path = 'Anzhc_Eyes_seg_hd.onnx'

# Check if the specified ONNX model file exists
if not os.path.exists(onnx_model_path):
    print(f"Error: ONNX model file not found at '{onnx_model_path}'.")
    print("Please make sure you have uploaded the file and the path is correct.")
    exit()

# 4. Load the ONNX Segmentation Model with Ultralytics
print(f"\nLoading your ONNX model: {onnx_model_path}")
try:
    # Load the .onnx file
    model = YOLO(onnx_model_path) # Ultralytics will use ONNX Runtime
    print("ONNX model loaded successfully.")
except Exception as e:
    print(f"Error loading ONNX model: {e}")
    exit()

# 5. Prepare Your Image (same as before)
# image_url = 'https://ultralytics.com/images/bus.jpg' # Example image
image_url = 'https://i.pinimg.com/originals/c6/e9/18/c6e918b1d62e38bb106110fc57bd12ff.jpg'

try:
    response = requests.get(image_url)
    response.raise_for_status()
    img_array = np.array(bytearray(response.content), dtype=np.uint8)
    original_image_for_masking = cv2.imdecode(img_array, cv2.IMREAD_COLOR)
    if original_image_for_masking is None:
        raise ValueError("Failed to decode image from URL.")
except Exception as e:
    print(f"Error fetching or processing image: {e}")
    exit()

# 6. Perform Inference for a Specific Class (same as before)
target_class_id = 0 # Class 0 is likely the only class in this eyes-seg model. Change if your model uses different classes.
print(f"\nPerforming inference with your ONNX model for class ID: {target_class_id}...")
try:
    results = model.predict(source=original_image_for_masking.copy(), classes=[target_class_id], conf=0.3)
except Exception as e:
    print(f"Error during ONNX model prediction: {e}")
    exit()
Loading your ONNX model: Anzhc_Eyes_seg_hd.onnx
WARNING ⚠️ Unable to automatically guess model task, assuming 'task=detect'. Explicitly define task for your model, i.e. 'task=detect', 'segment', 'classify','pose' or 'obb'.
ONNX model loaded successfully.

Performing inference with your ONNX model for class ID: 0...
Loading Anzhc_Eyes_seg_hd.onnx for ONNX Runtime inference...
Using ONNX Runtime CPUExecutionProvider
Error during ONNX model prediction: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from Anzhc_Eyes_seg_hd.onnx failed:Protobuf parsing failed.

It says that it can't properly detect the task. For the majority of my models, the proper task would be "segment". I'm not familiar with the protobuf error that follows, though. I've never used ONNX, so I can't help there, sorry. Maybe someone will stumble upon this thread and help, but the chances are low.
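
For the task warning, a minimal sketch of the usual fix (assuming the exported file sits next to the script) is to pass task="segment" explicitly when loading the ONNX weights, since Ultralytics cannot infer the task from an .onnx file:

from ultralytics import YOLO

# Explicitly declare the task so Ultralytics does not fall back to 'detect'.
model = YOLO("Anzhc_Eyes_seg_hd.onnx", task="segment")

# "test.jpg" is a placeholder image path for illustration.
results = model.predict(source="test.jpg", conf=0.3)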

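As for the INVALID_PROTOBUF error, it usually means ONNX Runtime could not parse the file at all (a corrupt or truncated export, or a path pointing at the wrong file). A minimal check, assuming the onnx package is installed, is to load and validate the file directly; if this passes, the file itself is fine and the problem lies elsewhere:

import os

import onnx

path = "Anzhc_Eyes_seg_hd.onnx"
print(f"File size: {os.path.getsize(path)} bytes")  # a truncated export is often suspiciously small

# Parse and structurally validate the exported model.
model_proto = onnx.load(path)
onnx.checker.check_model(model_proto)
print("ONNX file parsed and validated successfully.")
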