---
license: apache-2.0
title: UVIS
sdk: gradio
emoji: 🔥
colorFrom: blue
colorTo: indigo
pinned: true
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/6820d348853cd8d544c6b014/qapEjDg69wwVgeqCXWTiX.png
short_description: Unified Visual Intelligence System
allow_embedding: true
---
# UVIS - Unified Visual Intelligence System
### A Lightweight Web-Based Visual Perception Demo
> **Try it online**: [uvis.deecoded.io](https://uvis.deecoded.io)
> **GitHub**: [github.com/DurgaDeepakValluri/UVIS](https://github.com/DurgaDeepakValluri/UVIS)
---
## Overview
**UVIS** (Unified Visual Intelligence System) is a **lightweight, web-based visual perception demo**, originally conceived as a **spin-off of Percepta**, a larger modular perception framework.
The goal of UVIS is to make **scene understanding tools more accessible**, allowing anyone to try object detection, semantic segmentation, and depth estimation through a clean web interface, without requiring local setup.
UVIS currently runs on **[Render.com](https://www.render.com)'s Free Tier**, using **lightweight models** to ensure the experience remains stable on limited resources.
---
## Key Features
| Capability | Description |
| ---------------------------- | ----------------------------------------------------------------------------------- |
| 🟢 **Object Detection** | YOLOv5-Nano & YOLOv5-Small for fast, low-resource detection. |
| 🟢 **Semantic Segmentation** | SegFormer-B0 and DeepLabV3-ResNet50 for general-purpose scenes. |
| 🟢 **Depth Estimation** | MiDaS Small & DPT Lite for per-pixel depth estimation. |
| 🖼️ **Scene Blueprint** | Unified overlay combining all selected tasks. |
| 📊 **Scene Metrics** | Scene complexity scoring and agent-friendly summaries. |
| 📦 **Downloadable Results** | JSON, overlay images, and ZIP bundles. |
| 🌐 **Web-First Design** | No installation needed—hosted live at [uvis.deecoded.io](https://uvis.deecoded.io). |
| 🛠️ **Open Source** | Contribution-friendly, easy to extend and improve. |
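To make the table concrete, here is a minimal, hypothetical sketch of how the three perception tasks could be wired together with the named model families (YOLOv5-Nano via `torch.hub`, SegFormer-B0 via `transformers`, MiDaS Small via `torch.hub`). The exact checkpoints and pipeline structure are illustrative assumptions, not the UVIS source code:

```python
import numpy as np
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

image = Image.open("scene.jpg").convert("RGB")

# Object detection: YOLOv5-Nano loaded from torch.hub
detector = torch.hub.load("ultralytics/yolov5", "yolov5n", pretrained=True)
detections = detector(image).pandas().xyxy[0]  # DataFrame of boxes, scores, labels

# Semantic segmentation: SegFormer-B0 (ADE20K checkpoint assumed)
seg_name = "nvidia/segformer-b0-finetuned-ade-512-512"
processor = SegformerImageProcessor.from_pretrained(seg_name)
segmenter = SegformerForSemanticSegmentation.from_pretrained(seg_name).eval()
with torch.no_grad():
    logits = segmenter(**processor(images=image, return_tensors="pt")).logits
class_map = logits.argmax(dim=1)[0]  # per-pixel class indices (reduced resolution)

# Depth estimation: MiDaS Small loaded from torch.hub
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
with torch.no_grad():
    depth = midas(transform(np.array(image)))[0]  # relative inverse depth map

print(len(detections), class_map.shape, tuple(depth.shape))
```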
---
### Current Limitations & Roadmap
UVIS is designed for **lightweight demos** on **free-tier hosting**, which means:
* Models are optimized for speed and minimal compute.
* Only **image input** is supported at this time.
> As the project grows and higher hosting tiers become available, the roadmap includes:
>
> * **Video input support**
> * **Lightweight SLAM**
> * **Natural language scene descriptions**
> * **Higher-capacity, more accurate models**
---
## Architecture Highlights
* **Modular Python Backend with Model Registry**
* **Streamlit-Based Interactive Web UI**
* **HuggingFace Transformers & TorchVision Integration**
* **Lightweight Model Support (Render-Compatible)**
* **Structured JSON Output for AI Agents**
* **Robust Error Handling and Logging**
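As a rough illustration of the registry and JSON-output ideas above, the sketch below registers task loaders behind string keys and lazily loads only the tasks a user selects. All names here (`register`, `run_selected`, the task keys) are hypothetical and not the actual UVIS API:

```python
import json
from typing import Any, Callable, Dict

# Registry mapping a task name to a model loader. Loaders run lazily, so only the
# tasks a user actually selects are pulled into memory (useful on free-tier hosting).
MODEL_REGISTRY: Dict[str, Callable[[], Any]] = {}

def register(task: str) -> Callable[[Callable[[], Any]], Callable[[], Any]]:
    """Decorator that files a loader function under a task name."""
    def wrapper(loader: Callable[[], Any]) -> Callable[[], Any]:
        MODEL_REGISTRY[task] = loader
        return loader
    return wrapper

@register("detection")
def load_detector() -> Any:
    import torch
    return torch.hub.load("ultralytics/yolov5", "yolov5n", pretrained=True)

def run_selected(image, tasks: list) -> str:
    """Run the selected tasks and return an agent-friendly JSON summary."""
    summary: Dict[str, Any] = {}
    for task in tasks:
        model = MODEL_REGISTRY[task]()
        if task == "detection":
            boxes = model(image).pandas().xyxy[0]
            summary[task] = boxes[["name", "confidence"]].round(3).to_dict("records")
    return json.dumps({"tasks": summary}, indent=2)
```

A segmentation or depth loader would register the same way, and a JSON summary like this is the kind of structured output the downloadable results and agent-facing summaries could be built from.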
---
## 🤝 Contributing
UVIS is **open-source** and welcomes contributions.
You can:
* Suggest new features
* Improve the web interface
* Extend perception tasks
* Report issues or bugs
### 💻 **Clone and Run Locally**
```bash
git clone https://github.com/DurgaDeepakValluri/UVIS.git
cd UVIS
pip install -r requirements.txt
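# Launch the app; with the Streamlit-based UI described above, the command would be
# roughly the following (the entry-point script name is an assumption):
streamlit run app.py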
```
---
## 🌐 Live Demo
> **Explore it online at [uvis.deecoded.io](https://uvis.deecoded.io)**
> Upload an image, select your tasks, and view the results—all in your browser.
---
## 📝 License
Apache 2.0 License. Free for personal and commercial use with attribution.
© 2025 Durga Deepak Valluri