Local Deployment Guide for GAIA
This guide provides detailed instructions for deploying and running the GAIA agent on a local machine for development, testing, or personal use.
Prerequisites
Before deploying GAIA locally, ensure you have the following:
Python Environment:
- Python 3.9 or higher installed
- pip (Python package manager)
- (Optional) virtualenv or conda for environment isolation
API Keys:
- OpenAI API key for language models
- Additional API keys based on your configuration (Serper, Perplexity, etc.)
System Requirements:
- At least 4GB of RAM
- At least 2GB of free disk space
- Internet connection for API calls
Installation
Step 1: Clone the Repository
# Clone the GAIA repository
git clone https://github.com/your-organization/gaia.git
cd gaia
Step 2: Set Up Python Environment
# Option 1: Create a virtual environment with venv
python -m venv venv
# Activate the virtual environment
# On Windows
venv\Scripts\activate
# On macOS/Linux
source venv/bin/activate
# Option 2: Create a conda environment
conda create -n gaia python=3.9
conda activate gaia
Step 3: Install Dependencies
# Install required packages
pip install -r requirements.txt
# For development, install development dependencies as well
pip install -r requirements-dev.txt
Step 4: Configure Environment Variables
Create a .env file in the root directory of the project:
# Create .env file from template
cp .env.example .env
Edit the .env file with your API keys and configuration:
# API Keys
OPENAI_API_KEY=your-openai-api-key
SERPER_API_KEY=your-serper-api-key
PERPLEXITY_API_KEY=your-perplexity-api-key
# Optional: Supabase configuration for memory
SUPABASE_URL=your-supabase-url
SUPABASE_KEY=your-supabase-key
# Agent Configuration
MODEL_NAME=gpt-4o
VERBOSE=true
# UI Configuration
DEMO_MODE=true
SIMPLE_UI=false
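If you want to confirm the file is being picked up, a minimal Python sketch using python-dotenv (a common way to load .env files; the exact loading logic inside app.py may differ) looks like this:
# check_env.py - sketch; assumes the python-dotenv package is installed
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory into the process environment

for key in ("OPENAI_API_KEY", "SERPER_API_KEY", "MODEL_NAME", "DEMO_MODE"):
    value = os.getenv(key)
    print(f"{key}: {'set' if value else 'MISSING'}")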
Basic Deployment
Running the Web Interface
The simplest way to deploy GAIA locally is to run the web interface:
# Start the web interface
python app.py
This will start a Gradio web server that you can access at http://localhost:7860 in your browser.
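app.py builds the full GAIA interface, but the host and port mechanics are standard Gradio. As a hedged illustration of how the server binding works (the answer function and single-textbox interface below are placeholders, not GAIA's actual UI):
import os
import gradio as gr

def answer(question: str) -> str:
    # Placeholder handler; the real app routes this through the GAIA agent.
    return f"You asked: {question}"

demo = gr.Interface(fn=answer, inputs="text", outputs="text")
# Bind to localhost on the default Gradio port; PORT can be overridden via the environment.
demo.launch(server_name="127.0.0.1", server_port=int(os.getenv("PORT", "7860")))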
Running in Demo Mode
For quick testing without setting up authentication:
# Enable demo mode
export DEMO_MODE=true # On Windows: set DEMO_MODE=true
python app.py
Running with a Simplified UI
For a more streamlined interface:
# Enable simplified UI
export SIMPLE_UI=true # On Windows: set SIMPLE_UI=true
python app.py
Command-Line Usage
GAIA can also be used directly from the command line:
# Run a single query
python -m src.gaia.cli "What is quantum computing?"
# Run in interactive mode
python -m src.gaia.cli --interactive
# Run with specific configuration
python -m src.gaia.cli --model "gpt-3.5-turbo" --verbose "What is climate change?"
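If you want to script batch queries on top of the CLI shown above, a small wrapper is enough. This sketch only assumes that the CLI prints its answer to stdout and exits with a non-zero status on failure:
# ask_gaia.py - thin wrapper around the GAIA CLI (sketch)
import subprocess
import sys

def ask_gaia(question: str, model: str = "gpt-4o") -> str:
    """Run a single GAIA CLI query and return whatever it prints to stdout."""
    result = subprocess.run(
        [sys.executable, "-m", "src.gaia.cli", "--model", model, question],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(ask_gaia("What is quantum computing?"))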
Advanced Configuration
Custom Configuration File
For more advanced configuration, create a custom configuration file:
# Create a config.json file
cat > config.json << EOF
{
  "api": {
    "openai": {
      "api_key": "your-openai-key"
    },
    "serper": {
      "api_key": "your-serper-key"
    }
  },
  "models": {
    "default": "gpt-4o",
    "fallback": "gpt-3.5-turbo"
  },
  "tools": {
    "web_search": {
      "enabled": true,
      "default_provider": "serper"
    },
    "academic_search": {
      "enabled": true
    }
  },
  "memory": {
    "supabase": {
      "enabled": false
    }
  }
}
EOF
# Run with custom configuration
python app.py --config config.json
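For reference, reading such a file is plain JSON work. The sketch below shows one way a config file could be merged with environment-variable overrides; it illustrates the shape of the file above rather than GAIA's actual loader:
# inspect_config.py - sketch: read config.json and apply env overrides
import json
import os
from pathlib import Path

config = json.loads(Path("config.json").read_text())

# Environment variables take precedence over file values (illustrative policy).
config["models"]["default"] = os.getenv("MODEL_NAME", config["models"]["default"])
config["api"]["openai"]["api_key"] = os.getenv(
    "OPENAI_API_KEY", config["api"]["openai"]["api_key"]
)

print(json.dumps(config["models"], indent=2))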
Enabling Memory with Supabase
To use Supabase for persistent memory:
- Create a Supabase project at https://supabase.com
- Create the required tables using the provided SQL script:
# Copy the SQL script
cp docs/deployment/schema/supabase_tables.sql ./supabase_setup.sql
# Manually execute this in your Supabase SQL editor
# or use the Supabase CLI
- Update your .env file with Supabase credentials (see the connectivity check sketched below):
SUPABASE_URL=your-supabase-url
SUPABASE_KEY=your-supabase-key
SUPABASE_MEMORY_ENABLED=true
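Before restarting GAIA, you can confirm the credentials work with the supabase Python client. The table name used here (memories) is only a placeholder; substitute whatever tables the provided SQL script creates:
# check_supabase.py - connectivity sketch; requires the `supabase` package
import os
from supabase import create_client

client = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

# Query a single row from a placeholder table to prove the connection works.
response = client.table("memories").select("*").limit(1).execute()
print(f"Connected. Rows returned: {len(response.data)}")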
Running as a Service
Using Systemd (Linux)
To run GAIA as a background service on Linux using systemd:
- Create a systemd service file:
sudo nano /etc/systemd/system/gaia.service
- Add the following content:
[Unit]
Description=GAIA Assessment Agent
After=network.target
[Service]
User=your-username
WorkingDirectory=/path/to/gaia
Environment="PATH=/path/to/gaia/venv/bin"
ExecStart=/path/to/gaia/venv/bin/python app.py
Restart=on-failure
RestartSec=5
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
- Enable and start the service:
sudo systemctl enable gaia
sudo systemctl start gaia
- Check service status:
sudo systemctl status gaia
Using PM2 (Cross-platform)
For a more flexible service manager that works across platforms:
- Install PM2:
npm install -g pm2
- Create an ecosystem file:
cat > ecosystem.config.js << EOF
module.exports = {
  apps: [{
    name: "gaia",
    script: "app.py",
    interpreter: "./venv/bin/python",
    env: {
      OPENAI_API_KEY: "your-openai-key",
      SERPER_API_KEY: "your-serper-key",
      MODEL_NAME: "gpt-4o",
      VERBOSE: "true"
    }
  }]
}
EOF
- Start with PM2:
pm2 start ecosystem.config.js
- Monitor and manage:
pm2 status
pm2 logs gaia
pm2 restart gaia
Docker Deployment
GAIA can also be deployed using Docker for better isolation and portability:
Step 1: Create a Dockerfile
cat > Dockerfile << EOF
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 7860
CMD ["python", "app.py"]
EOF
Step 2: Create a Docker Compose File
cat > docker-compose.yml << EOF
version: '3'
services:
  gaia:
    build: .
    ports:
      - "7860:7860"
    environment:
      - OPENAI_API_KEY=your-openai-key
      - SERPER_API_KEY=your-serper-key
      - PERPLEXITY_API_KEY=your-perplexity-key
      - MODEL_NAME=gpt-4o
      - VERBOSE=true
      - DEMO_MODE=true
    volumes:
      - ./logs:/app/logs
EOF
Step 3: Build and Run with Docker Compose
# Build the Docker image
docker-compose build
# Run the container
docker-compose up -d
# Check logs
docker-compose logs -f
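Once the container is up, a quick standalone check (standard library only, not part of the GAIA codebase) confirms that the web interface is reachable:
# smoke_test.py - verify the Gradio UI answers on port 7860
import sys
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:7860", timeout=10) as response:
        print(f"GAIA UI reachable, HTTP status {response.status}")
except Exception as exc:  # connection refused, timeout, etc.
    print(f"GAIA UI not reachable: {exc}")
    sys.exit(1)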
Performance Optimization
Memory Usage
To optimize memory usage:
# Limit result cache size
export MEMORY_RESULT_CACHE_SIZE=100
# Set a shorter TTL for cached results (in seconds)
export MEMORY_TTL=1800 # 30 minutes
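GAIA's internal cache implementation may differ, but the semantics of these two settings correspond to a standard size-bounded TTL cache. A sketch with cachetools shows what they control:
# Illustration of the cache semantics only, not GAIA's actual cache code.
import os
from cachetools import TTLCache

cache = TTLCache(
    maxsize=int(os.getenv("MEMORY_RESULT_CACHE_SIZE", "100")),  # max number of cached results
    ttl=int(os.getenv("MEMORY_TTL", "1800")),                   # seconds before an entry expires
)

cache["last_query"] = "cached result"
print(cache.get("last_query"))  # evicted automatically after MEMORY_TTL seconds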
CPU Usage
For lower CPU usage:
# Disable verbose logging
export VERBOSE=false
# Use a lighter model
export MODEL_NAME=gpt-3.5-turbo
# Limit the number of tools enabled
export WEB_SEARCH_ENABLED=true
export ACADEMIC_SEARCH_ENABLED=false
Troubleshooting
Common Issues
API Key Issues:
Error: Authentication error with OpenAI API
Solution: Check that your API key is correct and has sufficient credits.
Port Conflicts:
Error: Address already in use
Solution: Change the port using an environment variable:
export PORT=7861
python app.py
Missing Dependencies:
ImportError: No module named 'some_package'
Solution: Ensure all dependencies are installed:
pip install -r requirements.txt
Memory Issues:
MemoryError or Process killed
Solution: Limit memory usage as described in the Performance Optimization section.
Logging
Enable detailed logging for troubleshooting:
# Enable debug logging
export LOG_LEVEL=DEBUG
python app.py
Log files are stored in the logs/ directory by default.
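If you need the same behaviour in your own scripts, the LOG_LEVEL convention maps directly onto Python's logging module. A sketch (the logs/ path matches the default mentioned above; the file name and handler setup inside GAIA itself may differ):
# logging_setup.py - sketch of wiring LOG_LEVEL to the standard logging module
import logging
import os

os.makedirs("logs", exist_ok=True)
level = getattr(logging, os.getenv("LOG_LEVEL", "INFO").upper(), logging.INFO)

logging.basicConfig(
    level=level,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
    handlers=[logging.StreamHandler(), logging.FileHandler("logs/gaia.log")],  # file name is illustrative
)
logging.getLogger("gaia").debug("Debug logging is enabled")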
Diagnostic Commands
Use these commands to diagnose issues:
# Check environment variables
python -c "import os; print(os.environ.get('OPENAI_API_KEY', 'Not set'))"
# Test API connections
python -m src.gaia.utils.cli.verify_connections
# Test memory connections
python -m src.gaia.utils.cli.verify_memory
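If the bundled verification helpers are not available in your checkout, the OpenAI connection can be checked directly with the official openai client (it reads OPENAI_API_KEY from the environment):
# check_openai.py - direct API check using the official openai client (v1+)
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
models = client.models.list()
print(f"Authentication OK, {len(models.data)} models visible")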
Security Considerations
When deploying GAIA locally, consider these security practices:
API Key Management:
- Store API keys in environment variables or a secure .env file
- Never commit API keys to version control
- Consider using a secret management solution for production
Network Security:
- By default, the web interface only listens on localhost
- To expose it to other machines, use --host 0.0.0.0 with caution
- Consider using a reverse proxy with authentication for wider access; a minimal in-app alternative is sketched below
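If you do expose the interface beyond localhost, Gradio's built-in basic authentication is a lightweight stopgap until a proper reverse proxy is in place. This sketch assumes app.py exposes (or can be adapted to expose) a Gradio Interface/Blocks object; the UI_USER and UI_PASSWORD variables and the placeholder handler are illustrative, not part of GAIA:
import os
import gradio as gr

def answer(question: str) -> str:
    return "placeholder"  # the real app routes this through the GAIA agent

demo = gr.Interface(fn=answer, inputs="text", outputs="text")
demo.launch(
    server_name="0.0.0.0",  # listen on all interfaces
    auth=(os.getenv("UI_USER", "admin"),          # hypothetical env vars for credentials
          os.getenv("UI_PASSWORD", "change-me")),
)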
Data Privacy:
- Be aware of what data is being sent to external APIs
- Consider privacy implications when using memory features
- Regularly clear cached data for sensitive applications
Upgrading
To upgrade your GAIA installation:
# Pull the latest changes
git pull
# Update dependencies
pip install -r requirements.txt
# Run migration scripts if available
python -m src.gaia.utils.cli.run_migrations
# Restart the service
# If using systemd:
sudo systemctl restart gaia
# If using PM2:
pm2 restart gaia
Conclusion
You now have GAIA running locally on your machine. For more advanced deployment options, check out the Hugging Face Deployment Guide or explore the API documentation to integrate GAIA into your own applications.
For any issues or questions, please refer to the troubleshooting section or create an issue on the project's GitHub repository.