 
Dell Pro AI Studio
Models for Dell Pro AI Studio
- Note: Sample Python script for running Whisper-Base-Small on CPU and NPU. This model is intended for RAI 1.5. It is a multilingual model and is not intended for production. The officially supported ASR can be found here: https://github.com/amd/RyzenAI-SW/tree/main/demo/ASR/Whisper
- amd/NPU-Nomic-embed-text-v1.5-ryzen-strix-cpp - Note: C++ implementation to test Nomic inference latency. The ONNX model and caches are compatible with RAI 1.4, which means the RAI 1.4 conda environment must be activated and the RYZEN_AI_INSTALLATION_PATH and XLNX_VART_FIRMWARE environment variables must be set.
- amd/NPU-ESRGAN-ryzen-strix-cpp - Note: C++ script to run ESRGAN inference (upsampling). The ONNX models and caches are compatible with RAI 1.4; no performance measurements. The RAI 1.4 conda environment must be activated, and the RYZEN_AI_INSTALLATION_PATH and XLNX_VART_FIRMWARE environment variables must be set. ESRGAN inference runs on a 1x250x250x3 PNG and produces a 1x1000x1000x3 PNG.
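Both RAI 1.4 samples above fail unless RYZEN_AI_INSTALLATION_PATH and XLNX_VART_FIRMWARE are set in the activated conda environment. A minimal pre-flight check of those variables might look like this (the helper name is an illustrative assumption, not part of the samples):

```python
import os

# Environment variables the RAI 1.4 samples above require.
REQUIRED_VARS = ("RYZEN_AI_INSTALLATION_PATH", "XLNX_VART_FIRMWARE")

def missing_rai_vars(env=None):
    """Return the names of required RAI variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_rai_vars()
    if missing:
        print("Set these before running the sample:", ", ".join(missing))
```

Running such a check before launching the C++ binaries gives a clearer error than a firmware-load failure deep inside the runtime.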
- amd/NPU-CLIP-Python - Note: Python implementation of CLIP inference. The ONNX models and caches are compatible with RAI 1.5.
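For a Python ONNX sample like the CLIP one, inference typically runs through ONNX Runtime, preferring the NPU execution provider and falling back to CPU. A sketch of that provider selection, assuming ONNX Runtime's VitisAI provider naming (the function name and model path are hypothetical, not taken from the repo):

```python
# Preferred order: NPU first (VitisAIExecutionProvider), then CPU fallback.
PREFERRED = ["VitisAIExecutionProvider", "CPUExecutionProvider"]

def choose_providers(available):
    """Keep preferred providers that the runtime actually reports."""
    chosen = [p for p in PREFERRED if p in available]
    return chosen or ["CPUExecutionProvider"]

# Usage with onnxruntime (not executed here; model path is a placeholder):
# import onnxruntime as ort
# providers = choose_providers(ort.get_available_providers())
# session = ort.InferenceSession("clip_model.onnx", providers=providers)
```

On a machine without the NPU driver stack, the same script then still runs on CPU, which matches the CPU-and-NPU pattern described for the Whisper sample above.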
- amd/RyzenAI-1.5-ESRGAN-Inference-ryzen-strix-cpp - Note: Optimized RAI 1.5 ESRGAN inference. Timers included; OpenCV is used for image manipulation. Expect 95-105 ms per inference.
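The 95-105 ms figure above comes from per-inference timers in the C++ sample. To reproduce that style of measurement from Python around any inference call, a simple wall-clock helper is enough (the helper name is an illustrative assumption):

```python
import time

def time_calls(fn, n=10):
    """Call fn n times and return per-call wall-clock latency in milliseconds."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies
```

Reporting the full list (or its median) rather than a single run makes it easier to see whether latency sits stably inside the expected 95-105 ms band.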
- amd/RAI_1.6_CS_ORT_Infernce - Note: Application code, ONNX models, and caches for YOLO, Whisper-small-en, and CLIP models, for validation of the AMD C# ORT bindings. Inference can be run independently of any RAI install using NuGet packages, noting that this corresponds with RAI 1.6.
- amd/RAI_1.6_CS_OGA_Inference - Note: C# implementation of an interactive LLM prompt running Phi-4-mini-instruct. Uses the 1.6.0 NuGet package.
