# Debug Guide for Auto Diffusers Config
This guide explains how to use the comprehensive debug logging system built into Auto Diffusers Config.
## Quick Start
### Enable Debug Logging
Set environment variables to control debug behavior:
```bash
# Enable debug logging
export DEBUG_LEVEL=DEBUG
export LOG_TO_FILE=true
export LOG_TO_CONSOLE=true
# Run the application
python launch_gradio.py
```
### Debug Levels
- `DEBUG`: Most verbose, shows all operations
- `INFO`: Normal operations and status updates
- `WARNING`: Potential issues and fallbacks
- `ERROR`: Errors and failures only
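These names match Python's standard `logging` levels, so a sketch of how `DEBUG_LEVEL` could be translated into a level (one plausible reading, not necessarily the project's exact implementation) looks like:

```python
import logging
import os

# Read DEBUG_LEVEL from the environment; fall back to INFO when the
# variable is unset or not a recognized logging level name.
level_name = os.environ.get("DEBUG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)
logging.basicConfig(level=level)
logging.getLogger(__name__).debug("debug logging enabled")
```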
## Log Files
When `LOG_TO_FILE=true`, logs are saved to the `logs/` directory:
- `auto_diffusers_YYYYMMDD_HHMMSS.log` - Complete application log
- `errors_YYYYMMDD_HHMMSS.log` - Error-only log for quick issue identification
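The timestamp suffix follows `strftime`'s `%Y%m%d_%H%M%S` pattern. A minimal sketch of building such a filename (the helper name is illustrative, not part of the project):

```python
from datetime import datetime

def timestamped_log_name(prefix: str = "auto_diffusers") -> str:
    """Build a log filename like auto_diffusers_20240131_235959.log."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{prefix}_{stamp}.log"

print(timestamped_log_name())
```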
## Component-Specific Debugging
### Hardware Detection
```python
import logging
from hardware_detector import HardwareDetector
logging.basicConfig(level=logging.DEBUG)
detector = HardwareDetector()
detector.print_specs()
```
**Debug Output Includes:**
- System platform and architecture detection
- GPU vendor identification (NVIDIA/AMD/Apple/Intel)
- VRAM measurement attempts
- PyTorch/CUDA/MPS availability checks
- Optimization profile selection logic
### Memory Calculator
```python
import logging
from simple_memory_calculator import SimpleMemoryCalculator
logging.basicConfig(level=logging.DEBUG)
calculator = SimpleMemoryCalculator()
result = calculator.get_model_memory_requirements("black-forest-labs/FLUX.1-schnell")
```
**Debug Output Includes:**
- Model memory lookup (known vs API estimation)
- HuggingFace API calls and responses
- File size analysis for unknown models
- Memory recommendation calculations
- Cache hit/miss operations
### AI Code Generation
```python
import logging
from auto_diffusers import AutoDiffusersGenerator
logging.basicConfig(level=logging.DEBUG)
generator = AutoDiffusersGenerator(api_key="your_key")
code = generator.generate_optimized_code(
    model_name="black-forest-labs/FLUX.1-schnell",
    prompt_text="A cat",
    use_manual_specs=True,
    manual_specs={...}
)
```
**Debug Output Includes:**
- Hardware specification processing
- Optimization profile selection
- Gemini API prompt construction
- API request/response timing
- Generated code length and validation
### Gradio Interface
```python
import logging
from gradio_app import GradioAutodiffusers
logging.basicConfig(level=logging.DEBUG)
app = GradioAutodiffusers()
```
**Debug Output Includes:**
- Component initialization status
- User input validation
- Model setting updates
- Interface event handling
## Environment Variables
Control debug behavior without modifying code:
```bash
# Debug level (DEBUG, INFO, WARNING, ERROR)
export DEBUG_LEVEL=DEBUG
# File logging (true/false)
export LOG_TO_FILE=true
# Console logging (true/false)
export LOG_TO_CONSOLE=true
# API key (masked in logs for security)
export GOOGLE_API_KEY=your_api_key_here
```
## Debug Utilities
### System Information Logging
```python
from debug_config import log_system_info
log_system_info()
```
Logs:
- Operating system and architecture
- Python version and executable path
- Environment variables (non-sensitive)
- Working directory and process ID
### Session Boundary Marking
```python
from debug_config import log_session_end
log_session_end()
```
Creates clear session boundaries in log files for easier analysis.
## Common Debug Scenarios
### 1. API Connection Issues
**Problem:** Gemini API failures
**Debug Command:**
```bash
DEBUG_LEVEL=DEBUG LOG_TO_FILE=true python -c "
from auto_diffusers import AutoDiffusersGenerator
import logging
logging.basicConfig(level=logging.DEBUG)
gen = AutoDiffusersGenerator('test_key')
"
```
**Look For:**
- API key validation messages
- Network connection attempts
- HTTP response codes and errors
### 2. Hardware Detection Problems
**Problem:** Wrong optimization profile selected
**Debug Command:**
```bash
DEBUG_LEVEL=DEBUG python -c "
from hardware_detector import HardwareDetector
import logging
logging.basicConfig(level=logging.DEBUG)
detector = HardwareDetector()
print('Profile:', detector.get_optimization_profile())
"
```
**Look For:**
- GPU detection via nvidia-smi
- PyTorch CUDA/MPS availability
- VRAM measurement calculations
- Profile selection logic
### 3. Memory Calculation Issues
**Problem:** Incorrect memory recommendations
**Debug Command:**
```bash
DEBUG_LEVEL=DEBUG python -c "
from simple_memory_calculator import SimpleMemoryCalculator
import logging
logging.basicConfig(level=logging.DEBUG)
calc = SimpleMemoryCalculator()
result = calc.get_model_memory_requirements('your_model_id')
"
```
**Look For:**
- Model lookup in known database
- HuggingFace API calls and file parsing
- Memory calculation formulas
- Recommendation generation logic
### 4. Code Generation Problems
**Problem:** Suboptimal generated code
**Debug Command:**
```bash
DEBUG_LEVEL=DEBUG python launch_gradio.py
```
**Look For:**
- Hardware specs passed to AI
- Optimization profile selection
- Prompt construction details
- API response processing
## Performance Debugging
### Timing Analysis
Enable timestamp logging to identify performance bottlenecks:
```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

start_time = time.perf_counter()  # monotonic clock, preferred for interval timing
# Your operation here
duration = time.perf_counter() - start_time
logger.info(f"Operation completed in {duration:.2f} seconds")
```
### Memory Usage Tracking
Monitor memory consumption during processing:
```python
import psutil
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

process = psutil.Process()
memory_before = process.memory_info().rss / 1024 / 1024  # resident set size, MB
# Your operation here
memory_after = process.memory_info().rss / 1024 / 1024  # MB
logger.info(f"Memory usage: {memory_before:.1f}MB -> {memory_after:.1f}MB (Δ{memory_after-memory_before:+.1f}MB)")
```
## Log Analysis Tips
### 1. Filter by Component
```bash
grep "auto_diffusers" logs/auto_diffusers_*.log
grep "hardware_detector" logs/auto_diffusers_*.log
grep "simple_memory_calculator" logs/auto_diffusers_*.log
```
### 2. Error-Only View
```bash
grep "ERROR" logs/auto_diffusers_*.log
# Or use the dedicated error log
cat logs/errors_*.log
```
### 3. Timing Analysis
```bash
grep "seconds" logs/auto_diffusers_*.log
```
### 4. API Interactions
```bash
grep -i "gemini\|api" logs/auto_diffusers_*.log
```
## Troubleshooting Common Issues
### Issue: No logs generated
**Solution:** Check write permissions for the `logs/` directory
### Issue: Too verbose output
**Solution:** Set `DEBUG_LEVEL=INFO` or `LOG_TO_CONSOLE=false`
### Issue: Missing log files
**Solution:** Ensure `LOG_TO_FILE=true` and check disk space
### Issue: Logs consuming too much space
**Solution:** Implement log rotation or clean old logs periodically
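For the rotation approach, Python's standard library already provides `logging.handlers.RotatingFileHandler`; a minimal sketch (file name and size limits are illustrative):

```python
import logging
import os
from logging.handlers import RotatingFileHandler

os.makedirs("logs", exist_ok=True)

# Keep at most 3 backups of ~1 MB each; the oldest data is discarded.
handler = RotatingFileHandler(
    "logs/auto_diffusers.log", maxBytes=1_000_000, backupCount=3
)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)
logging.getLogger().addHandler(handler)
```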
## Custom Debug Configuration
Create a custom debug setup for specific needs:
```python
from debug_config import setup_debug_logging
import logging
# Custom setup
setup_debug_logging(log_level='INFO', log_to_file=True, log_to_console=False)
# Modify specific component verbosity
logging.getLogger('simple_memory_calculator').setLevel(logging.DEBUG)
logging.getLogger('gradio').setLevel(logging.WARNING)
```
## Security Notes
- API keys are automatically masked in logs (shown as length only)
- Sensitive user inputs are not logged
- Personal hardware information is logged for debugging but can be disabled
- Log files may contain model names and prompts - consider this for privacy
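The actual masking logic lives in the project's logging setup; a hypothetical sketch of the length-only masking described above (the function name is illustrative):

```python
def mask_secret(value: str) -> str:
    """Replace a secret with a placeholder that reveals only its length."""
    return f"<masked, {len(value)} chars>" if value else "<empty>"

print(mask_secret("sk-example-123"))  # → <masked, 14 chars>
```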
## Getting Help
When reporting issues, include:
1. Debug level used (`DEBUG_LEVEL`)
2. Relevant log snippets from error and main log files
3. System information from `log_system_info()` output
4. Steps to reproduce the issue
The comprehensive logging system makes it easy to identify and resolve issues quickly!