# Manual Deployment Guide for Hugging Face Spaces

Your OmniAvatar project has been prepared for deployment to Hugging Face Spaces. Since we encountered some authentication issues, here's how to complete the deployment manually.
## Prerequisites

1. **Hugging Face Account**: Make sure you have an account at https://huggingface.co/
2. **Access Token**: Generate a write-access token at https://huggingface.co/settings/tokens
3. **Git**: Ensure Git is installed on your system
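A quick way to confirm the local tooling is in place before you start (these are standard Git/pip checks, nothing specific to this project):

```bash
# Verify local prerequisites
git --version             # Git must be installed
python -m pip --version   # pip is needed to install the Hugging Face CLI below
```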
## Authentication Setup

### Option 1: Using the Hugging Face CLI (Recommended)

```bash
# Install the Hugging Face CLI
pip install -U "huggingface_hub[cli]"

# Log in with your token
huggingface-cli login
# When prompted, enter your access token from https://huggingface.co/settings/tokens
```
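If you want to confirm that the login worked, the CLI can report which account is currently authenticated:

```bash
# Optional: confirm the CLI is authenticated with the expected account
huggingface-cli whoami
```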
### Option 2: Using Git Credentials

```bash
# Configure git to use your HF token as the password
git remote set-url origin https://bravedims:YOUR_HF_TOKEN@huggingface.co/spaces/bravedims/AI_Avatar_Chat.git
```
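Note that this embeds the token in plain text in `.git/config`. If you prefer, a Git credential helper can prompt for the token once and cache it instead; a minimal sketch, assuming you keep the plain remote URL without the token:

```bash
# Alternative: keep the plain remote URL and let a credential helper store the token
git remote set-url origin https://huggingface.co/spaces/bravedims/AI_Avatar_Chat.git
git config credential.helper store   # or use your platform's keychain helper
# The next push will prompt for a username (bravedims) and password (your HF access token)
```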
## Deploy to Hugging Face

Once authenticated, push your changes:

```bash
# Navigate to the deployment directory
cd path/to/HF_Deploy/AI_Avatar_Chat

# Push to deploy
git push origin main
```
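If the push is rejected, it is worth confirming that the remote and branch match what the Space expects (standard Git checks):

```bash
# Confirm the remote points at the Space and that you are on the main branch
git remote -v
git branch --show-current
git status
```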
## Files Prepared for Deployment

Your space now includes:

- **app.py** - Main application with FastAPI + Gradio interface
- **requirements.txt** - Optimized dependencies for HF Spaces
- **Dockerfile** - HF Spaces compatible Docker configuration
- **README.md** - Comprehensive space documentation
- **configs/** - Model configuration files
- **scripts/** - Inference scripts
- **examples/** - Sample inputs
- **elevenlabs_integration.py** - TTS integration
## Space Configuration

The space is configured with the following (a few pre-push checks are sketched after this list):

- **SDK**: Docker
- **Hardware**: T4-medium (GPU enabled)
- **Port**: 7860 (required by HF Spaces)
- **User**: Non-root user, as required by HF
- **Base Image**: PyTorch with CUDA support
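A few quick sanity checks you can run in the deployment directory before pushing. The file names follow this project's layout, but the exact Dockerfile contents may differ:

```bash
# Sanity-check the Space configuration before pushing (run from the deployment directory)
head -n 15 README.md                 # front matter should declare sdk: docker and app_port: 7860
grep -n "7860" Dockerfile            # the container must listen on port 7860
grep -nE "useradd|USER " Dockerfile  # confirm a non-root user is set up
```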
## Key Features Deployed

1. **Avatar Generation**: Text-to-avatar with lip-sync
2. **ElevenLabs TTS**: High-quality text-to-speech
3. **Audio URL Support**: Direct audio file inputs
4. **Image References**: Guide avatar appearance
5. **GPU Acceleration**: Optimized for HF hardware
## Environment Variables

To enable ElevenLabs TTS functionality:

1. Go to your Space settings on HF
2. Add a secret named `ELEVENLABS_API_KEY`
3. Set the value to your ElevenLabs API key
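On Spaces, secrets are exposed to the app as environment variables at runtime. If you want to exercise the TTS path locally before deploying, you can export the same variable yourself (the key value below is a placeholder):

```bash
# Local testing only; on HF Spaces the ELEVENLABS_API_KEY secret is injected automatically
export ELEVENLABS_API_KEY="your-elevenlabs-key"   # placeholder value
python app.py
```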
## Testing Your Deployment

After deployment:

1. Wait for the space to build (this may take 10-15 minutes)
2. Access your space at https://huggingface.co/spaces/bravedims/AI_Avatar_Chat
3. Test the Gradio interface with sample prompts
4. Verify that the API endpoints work: `/health`, `/generate` (see the curl sketch below)
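A rough sketch of checking the endpoints with curl. The `*.hf.space` hostname is assumed from the usual owner-name pattern for Spaces, and the `/generate` payload fields are illustrative; adjust both to match `app.py`:

```bash
# Health check (hostname assumed from the usual <owner>-<space>.hf.space pattern)
curl https://bravedims-ai-avatar-chat.hf.space/health

# Generation request; the JSON fields are illustrative, check app.py for the real schema
curl -X POST https://bravedims-ai-avatar-chat.hf.space/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A friendly avatar greeting the viewer"}'
```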
## Monitoring

- Check build logs in the HF Space interface
- Monitor resource usage and performance
- Review user feedback and iterate
## Updating Your Space

To make changes:

1. Modify files in your local HF_Deploy/AI_Avatar_Chat directory
2. Commit the changes: `git add . && git commit -m "Update message"`
3. Push: `git push origin main`
4. HF will automatically rebuild and redeploy the space
## Troubleshooting

- **Build fails**: Check the Dockerfile and requirements.txt (a local build sketch is shown below)
- **Model not found**: Ensure download_models.sh runs correctly
- **Memory issues**: Consider upgrading to larger hardware
- **Port conflicts**: The space must use port 7860
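When the build fails on HF, reproducing it locally is usually faster than iterating through pushes. A minimal sketch, assuming Docker is installed locally (the image tag is arbitrary):

```bash
# Reproduce the Space build locally (image tag is arbitrary)
docker build -t ai-avatar-chat .
docker run --rm -p 7860:7860 ai-avatar-chat
# Add --gpus all if you have a local NVIDIA GPU and the NVIDIA container toolkit installed.
# Then open http://localhost:7860 to check that the app starts and listens on the expected port.
```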
---

## Next Steps

1. Complete the authentication setup above
2. Push to deploy: `git push origin main`
3. Configure the ElevenLabs API key as a secret
4. Test and iterate on your deployed space!

Your OmniAvatar-14B space is ready for deployment!