# YOLOX

YOLOX is an anchor-free version of YOLO with a simpler design but better performance, aiming to bridge the gap between the research and industrial communities. YOLOX is a high-performing object detector and an improvement on the existing YOLO series, which is in constant exploration of techniques for an optimal speed/accuracy trade-off in real-time object detection applications.

Key features of the YOLOX object detector:

- **Anchor-free detection** significantly reduces the number of design parameters
- **A decoupled head for classification, regression, and localization** improves convergence speed
- **The SimOTA advanced label assignment strategy** reduces training time and avoids additional solver hyperparameters
- **Strong data augmentations like MixUp and Mosaic** boost YOLOX performance

**Note**:

- This version of YOLOX: YOLOX_s
- `object_detection_yolox_2022nov_int8bq.onnx` represents the block-quantized version in int8 precision and is generated using [block_quantize.py](../../tools/quantize/block_quantize.py) with `block_size=64`.

## Demo

### Python

Run the following command to try the demo:

```shell
# detect on camera input
python demo.py
# detect on an image
python demo.py --input /path/to/image -v
```

Note:

- the image result is saved as "result.jpg"
- this model requires `opencv-python>=4.8.0`

### C++

Install the latest OpenCV and CMake >= 3.24.0 to get started:

```shell
# A typical and default installation path of OpenCV is /usr/local
cmake -B build -D OPENCV_INSTALLATION_PATH=/path/to/opencv/installation .
cmake --build build

# detect on camera input
./build/opencv_zoo_object_detection_yolox
# detect on an image
./build/opencv_zoo_object_detection_yolox -m=/path/to/model -i=/path/to/image -v
# get help messages
./build/opencv_zoo_object_detection_yolox -h
```

## Results

Here are some sample results observed using the model (**yolox_s.onnx**). Check [benchmark/download_data.py](../../benchmark/download_data.py) for the original images.

## Model metrics

The model is evaluated on [COCO 2017 val](https://cocodataset.org/#download). Results are shown below:
### Average Precision (AP)

| area   | IoU       | AP    |
|:-------|:----------|:------|
| all    | 0.50:0.95 | 0.405 |
| all    | 0.50      | 0.593 |
| all    | 0.75      | 0.437 |
| small  | 0.50:0.95 | 0.232 |
| medium | 0.50:0.95 | 0.448 |
| large  | 0.50:0.95 | 0.541 |

### Average Recall (AR)

| area   | IoU       | AR    |
|:-------|:----------|:------|
| all    | 0.50:0.95 | 0.326 |
| all    | 0.50:0.95 | 0.531 |
| all    | 0.50:0.95 | 0.574 |
| small  | 0.50:0.95 | 0.365 |
| medium | 0.50:0.95 | 0.634 |
| large  | 0.50:0.95 | 0.724 |
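The int8 block-quantized model mentioned above is produced by the repository's `block_quantize.py` tool with `block_size=64`. As an illustration of the general idea only (not the tool's actual implementation), a minimal NumPy sketch of block-wise int8 quantization might look like this: each block of 64 values gets its own scale, so an outlier in one block does not degrade the precision of the others.

```python
import numpy as np

def block_quantize_int8(weights: np.ndarray, block_size: int = 64):
    """Quantize a float array to int8 one block at a time.

    Illustrative sketch only; the real block_quantize.py tool may differ.
    Returns the int8 blocks and one float scale per block.
    """
    flat = weights.ravel()
    pad = (-flat.size) % block_size              # pad so length divides evenly
    blocks = np.pad(flat, (0, pad)).reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 127.0
    scales[scales == 0] = 1.0                    # avoid division by zero
    q = np.clip(np.round(blocks / scales), -128, 127).astype(np.int8)
    return q, scales

def block_dequantize(q: np.ndarray, scales: np.ndarray, shape, size):
    """Recover approximate float weights from int8 blocks and per-block scales."""
    return (q.astype(np.float32) * scales).ravel()[:size].reshape(shape)

# Example: round-trip a small weight matrix through block quantization
w = np.random.randn(3, 130).astype(np.float32)
q, s = block_quantize_int8(w, block_size=64)
w_hat = block_dequantize(q, s, w.shape, w.size)
# the reconstruction error stays below half of the largest per-block scale
print(np.abs(w - w_hat).max())
```

Per-block scales are what make this cheaper in memory than per-tensor float weights while staying more accurate than a single global int8 scale.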