How to use ApacheOne/expimodel with Transformers:

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("ApacheOne/expimodel", dtype="auto")
```
HiDream-O1-Image-Dev ModelOpt Upload
This repo was uploaded from Colab.
Included folders:
- `modelopt-v4/`: ModelOpt NVFP4 conversion output folder.
- `merged-load-v3/`: temporary merged-load HF folder used for loading/conversion.
Notes:
- This is experimental.
- Check whether `modelopt-v4/hf_quant_config.json` exists.
- If it is missing, the ModelOpt export did not create a clean unified HF quantized checkpoint.
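The check above can be scripted. This is a minimal sketch that assumes the repo has been downloaded locally and that `modelopt-v4/` sits in the current working directory; the path and printed messages are illustrative, not part of the upload:

```python
import json
from pathlib import Path

# Path assumed relative to the downloaded repo root; adjust as needed.
config_path = Path("modelopt-v4") / "hf_quant_config.json"

if config_path.exists():
    with config_path.open() as f:
        quant_config = json.load(f)
    # Show the top-level keys the ModelOpt export recorded.
    print("hf_quant_config.json found; keys:", sorted(quant_config))
else:
    print("hf_quant_config.json missing: the ModelOpt export did not "
          "produce a clean unified HF quantized checkpoint.")
```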