GATE Motion Analysis - Deployment Guide
Quick Start Options
Option 1: Test Locally First (Recommended)
# 1. Navigate to deployment folder
cd gradio_deployment
# 2. Install requirements
pip install -r requirements.txt
# 3. Test locally
python app.py
Option 2: Deploy to Hugging Face Spaces
Prerequisites
- Create a Hugging Face account at https://huggingface.co/join
- Create an access token at https://huggingface.co/settings/tokens
- Select "Write" permissions
- Copy the token (starts with hf_)
Deployment Steps
# 1. Login to Hugging Face (paste your token when prompted)
huggingface-cli login
# 2. Deploy using Gradio
gradio deploy --title "GATE Motion Analysis" --app-file app.py
# 3. Follow the prompts to create your Space
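Before creating the Space, you can confirm that the token was accepted. A minimal check, assuming the huggingface_hub package (installed alongside Gradio) is available:
# Confirm the Hugging Face login before running `gradio deploy`
from huggingface_hub import HfApi
try:
    user = HfApi().whoami()
    print(f"Logged in as: {user['name']}")
except Exception as exc:
    print(f"Not logged in - run `huggingface-cli login` first ({exc})")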
Performance Optimisations Implemented
Fixed Issues
- Emojis Removed: Clean British English interface
- Skeleton Overlay: Proper pose detection with visual feedback
- GPU Acceleration: Automatic detection and optimisation
- Streaming FPS: Optimised from 10 FPS to 30 FPS
- Memory Management: Efficient GPU memory usage
Performance Features
- Dynamic GPU Detection: Automatically uses best available hardware
- YOLOv8 Integration: GPU-accelerated pose detection
- MediaPipe Fallback: CPU compatibility for wider deployment (see the backend-selection sketch after this list)
- Real-time Streaming: 30-60 FPS depending on hardware
- Memory Optimisation: Reduced buffer sizes for instant feedback
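As a rough illustration of the detection and fallback behaviour listed above, the sketch below loads YOLOv8 pose when a CUDA device is present and falls back to MediaPipe otherwise. The helper name load_pose_model is illustrative rather than part of app.py, and it assumes the ultralytics and mediapipe packages from requirements.txt.
# Illustrative only: load_pose_model is not an actual function in app.py
import torch

def load_pose_model():
    if torch.cuda.is_available():
        from ultralytics import YOLO
        return "yolov8", YOLO("yolov8s-pose.pt")  # GPU-accelerated pose detection
    import mediapipe as mp
    # CPU fallback for machines without a CUDA device
    return "mediapipe", mp.solutions.pose.Pose(static_image_mode=False)

backend, model = load_pose_model()
print(f"Selected backend: {backend}")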
Hardware Requirements
Minimum (CPU Only)
- CPU: 4+ cores
- RAM: 8GB system memory
- FPS: ~15 FPS with MediaPipe
Recommended (GPU)
- GPU: CUDA-compatible with 4GB+ VRAM
- CPU: 6+ cores
- RAM: 16GB system memory
- FPS: 60+ FPS with YOLOv8
Optimal (High-end GPU)
- GPU: RTX 3080/4080+ or Tesla V100+
- VRAM: 8GB+ dedicated
- CPU: 8+ cores
- RAM: 32GB system memory
- FPS: 120+ FPS with YOLOv8
Configuration Options
The deployment automatically detects your hardware and optimises settings:
# Automatic configuration based on detected hardware
{
    "CPU Only": {
        "fps": 15,
        "model": "mediapipe",
        "threads": 4
    },
    "GPU Available": {
        "fps": 60,
        "model": "yolov8s-pose",
        "precision": "fp16"
    },
    "High-end GPU": {
        "fps": 120,
        "model": "yolov8m-pose",
        "precision": "fp16"
    }
}
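A hypothetical helper showing how one of the tiers above might be chosen at start-up; the 8 GB VRAM threshold is an assumption based on the hardware table, not a value taken from config.py.
# Hypothetical tier selection based on detected VRAM (threshold is assumed)
import torch

def detect_profile():
    if not torch.cuda.is_available():
        return "CPU Only"
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    return "High-end GPU" if vram_gb >= 8 else "GPU Available"

print(detect_profile())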
Troubleshooting
Common Issues
Import Errors
pip install -r requirements.txt
CUDA Not Available
- Check GPU drivers
- Install PyTorch with CUDA support (a verification sketch follows this list)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
Memory Issues
- Reduce batch size in config.py
- Use smaller model (yolov8n-pose.pt)
Low FPS
- Check GPU utilisation
- Reduce image resolution
- Use CPU fallback mode
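If CUDA still appears unavailable after installing the CUDA build of PyTorch, a quick check from a Python shell shows whether the GPU is visible at all:
# Verify that PyTorch can see the GPU after installing the cu118 wheel
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"Device: {torch.cuda.get_device_name(0)}")
else:
    print("CUDA not visible - check drivers with nvidia-smi and reinstall the CUDA build")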
Performance Monitoring
Monitor your deployment performance:
# Check GPU utilisation
import torch
if torch.cuda.is_available():
    print(f"GPU Memory: {torch.cuda.memory_allocated(0)/1e9:.1f}GB")
    print(f"GPU Utilisation: {torch.cuda.utilization(0)}%")  # requires the pynvml package
Security Considerations
For public deployment:
- API endpoints are disabled
- File access is restricted
- No user authentication required (demo mode)
- HTTPS enabled by default on Hugging Face Spaces
Next Steps
- Test Locally: Start with local testing to verify functionality
- Deploy to Spaces: Use Gradio deploy for public access
- Monitor Performance: Check logs and usage metrics
- Scale if Needed: Consider dedicated GPU instances for high traffic
Support
If you encounter issues:
- Check the deployment logs
- Verify hardware compatibility
- Test with reduced settings
- Contact the development team with specific error messages