# Deployment Guide
## Deploy to Hugging Face Spaces
### Prerequisites
1. Install the Hugging Face CLI:
```bash
pip install huggingface_hub
```
2. Log in to Hugging Face:
```bash
huggingface-cli login
```
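To confirm the login worked, you can check for a cached token from Python. This is a small sketch using `huggingface_hub.whoami` (the `login_status` helper name is illustrative, not part of the app):

```python
def login_status():
    """Return the Hub username for the cached token, or None if not logged in."""
    try:
        # whoami() reads the token saved by `huggingface-cli login`
        from huggingface_hub import whoami
        return whoami()["name"]
    except Exception:
        return None

# print(login_status())  # your username if logged in, otherwise None
```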
### Create and Deploy Space
1. **Create a new Space on Hugging Face Hub:**
```bash
huggingface-cli repo create --type space --space_sdk gradio your-username/one-pager-generator
```
2. **Clone and set up the repository:**
```bash
git clone https://huggingface.co/spaces/your-username/one-pager-generator
cd one-pager-generator
```
3. **Copy files to the Space repository:**
```bash
cp ../one-pager/* .
```
4. **Add, commit and push:**
```bash
git add .
git commit -m "Initial commit: AI One-Pager Generator"
git push
```
### Alternative: Direct CLI Upload
Instead of cloning the repository, you can upload files directly with the Hugging Face CLI:
```bash
huggingface-cli upload your-username/one-pager-generator . --repo-type=space
```
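The same upload can also be done programmatically via the `huggingface_hub` API. A minimal sketch (the `push_space` helper and the repo id are placeholders, not the guide's tested path; it assumes you have already logged in):

```python
def push_space(repo_id, folder="."):
    """Upload a local folder to a Space repo via the Hub API."""
    # Imported lazily so the sketch stays self-contained
    from huggingface_hub import HfApi
    api = HfApi()
    api.upload_folder(folder_path=folder, repo_id=repo_id, repo_type="space")

# Usage (after `huggingface-cli login`):
# push_space("your-username/one-pager-generator")
```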
### Files Required for Deployment
- `app.py` - Main application file
- `requirements.txt` - Python dependencies
- `config.yaml` - Space configuration
- `README.md` - Documentation
- `.gitignore` - Git ignore patterns
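Given the stack described below (Gradio UI, `distilgpt2` via Transformers, CPU inference), a `requirements.txt` for this kind of Space might look as follows. The exact package list and versions are assumptions; pin whatever your app actually imports:

```
gradio
transformers
torch
```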
### Configuration Notes
- The app uses the lightweight `distilgpt2` model for broad compatibility
- Inference runs on CPU only, so the Space works on the free hardware tier
- A fallback template system ensures reliable output even if the model fails to load
- The Gradio interface is optimized for Spaces
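The fallback behavior described above can be sketched as follows. Function names and the template text are illustrative, not the app's actual code; the model path assumes the Transformers `pipeline` API:

```python
def load_generator():
    """Try to load the distilgpt2 text-generation pipeline; return None on failure."""
    try:
        from transformers import pipeline
        return pipeline("text-generation", model="distilgpt2", device=-1)  # -1 = CPU
    except Exception:
        return None

def generate_one_pager(topic, generator=None):
    """Generate a one-pager with the model, or fall back to a structured template."""
    if generator is not None:
        try:
            prompt = f"One-pager about {topic}:"
            return generator(prompt, max_new_tokens=200)[0]["generated_text"]
        except Exception:
            pass  # fall through to the template
    # Template fallback: always produces structured, usable output
    return f"# {topic}\n\n## Overview\n\n## Key Points\n\n## Next Steps\n"

# generator = load_generator()
# print(generate_one_pager("AI in Healthcare", generator))
```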
### Post-Deployment
After deployment, your Space will be available at:
`https://huggingface.co/spaces/your-username/one-pager-generator`
The app will automatically:
1. Install dependencies from `requirements.txt`
2. Load the AI model
3. Launch the Gradio interface
4. Become accessible on the web
### Troubleshooting
- **Model loading issues**: if the model fails to load, the app falls back to structured templates
- **Memory issues**: the smaller DistilGPT2 model keeps memory usage within free-tier limits
- **Timeout issues**: CPU inference is slower than GPU but more reliable on the free tier