Deploying GAIA on Hugging Face Spaces

This guide provides step-by-step instructions for deploying the GAIA agent on Hugging Face Spaces, making it accessible as a web application to users worldwide.

Why Deploy on Hugging Face Spaces?

Hugging Face Spaces offers several advantages for deploying GAIA:

  1. Free Hosting: Basic deployment is free with reasonable usage limits
  2. Easy Sharing: Public URL that can be shared with anyone
  3. Version Control: Built-in Git integration
  4. Secrets Management: Secure storage for API keys
  5. Community: Integration with the broader AI community
  6. Customization: Support for custom domains and branding

Prerequisites

Before deploying to Hugging Face Spaces, you'll need:

  1. Hugging Face Account: Create an account at huggingface.co
  2. API Keys: Gather all necessary API keys (OpenAI, Serper, etc.)
  3. GAIA Repository: A local copy of the GAIA repository
  4. Git: For pushing your code to Hugging Face

Deployment Steps

Step 1: Prepare Your Repository

  1. Clone the GAIA repository if you haven't already:
git clone https://github.com/your-organization/gaia.git
cd gaia
  2. Create a specific requirements.txt file for Hugging Face deployment:
# Copy the main requirements
cp requirements.txt requirements-hf.txt

# Edit to add Gradio (if not already included)
echo "gradio>=4.0.0" >> requirements-hf.txt

# Remove any development-specific packages
# Edit requirements-hf.txt to remove unnecessary packages
  3. Create a Hugging Face-specific app.py or modify the existing one:
import os
import gradio as gr
from src.gaia.agent import GaiaAgent
from src.gaia.config import Configuration

# Initialize configuration
config = Configuration()

# Set default values for Hugging Face deployment
config.set("demo_mode", True)
config.set("models.default", "gpt-3.5-turbo")  # Use cheaper model by default

# Initialize the agent
agent = GaiaAgent(config=config)

# Define the Gradio interface
def process_query(query, history):
    try:
        response = agent.run(query)
        return response
    except Exception as e:
        return f"Error: {str(e)}"

# Create the gradio app
demo = gr.ChatInterface(
    fn=process_query,
    title="GAIA - Grounded AI Alignment Agent",
    description="Ask any question and GAIA will search for information to provide a grounded answer.",
    examples=[
        "What is quantum computing?",
        "Explain the theory of relativity in simple terms.",
        "What are the latest developments in AI safety?"
    ],
    theme="soft"  # "huggingface" is not a valid built-in theme name in Gradio 4+
)

# Launch the app
if __name__ == "__main__":
    demo.launch()
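Before pushing, the Gradio callback can be sanity-checked locally. The sketch below swaps GaiaAgent for a hypothetical stub, so the wrapper and its error path run without API keys or network access:

```python
# Hypothetical stub standing in for GaiaAgent, so the Gradio
# callback can be exercised without API keys or network access.
class StubAgent:
    def run(self, query):
        if not query:
            raise ValueError("empty query")
        return f"stub answer for: {query}"

def process_query(query, history):
    # Same wrapper shape as in app.py, but backed by the stub.
    try:
        return StubAgent().run(query)
    except Exception as e:
        return f"Error: {str(e)}"

print(process_query("What is quantum computing?", []))
print(process_query("", []))  # exercises the error path
```

If both calls print without raising, the wrapper is safe to hand to gr.ChatInterface.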

Step 2: Create a Space on Hugging Face

  1. Log in to Hugging Face and go to huggingface.co/spaces

  2. Click on "Create new Space"

  3. Fill in the details:

    • Owner: Your username or organization
    • Space name: Choose a unique name (e.g., "gaia-agent")
    • License: Choose an appropriate license (e.g., MIT)
    • SDK: Choose "Gradio"
    • Space hardware: Start with "CPU basic" (free tier)
    • Make this Space private: Optional, if you want to restrict access
  4. Click "Create Space"
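The same Space can also be created programmatically with the huggingface_hub client. This is a sketch: it assumes the package is installed, an HF_TOKEN with write access is set, and the repo_id is a placeholder you should replace:

```python
import os

# huggingface_hub is a third-party package; guard the import so the
# sketch degrades gracefully where it is not installed.
try:
    from huggingface_hub import create_repo
except ImportError:
    create_repo = None

def space_url(owner: str, name: str) -> str:
    # Convenience helper mirroring the URL shown after creation.
    return f"https://huggingface.co/spaces/{owner}/{name}"

# Only attempt the API call when the client and a token are available.
if create_repo and os.environ.get("HF_TOKEN"):
    create_repo(
        repo_id="your-username/gaia-agent",  # placeholder
        repo_type="space",
        space_sdk="gradio",
        private=False,
    )
    print(space_url("your-username", "gaia-agent"))
```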

Step 3: Configure the Repository

  1. In your local GAIA directory, add the Hugging Face Space as a remote:
git remote add space https://huggingface.co/spaces/your-username/gaia-agent
  2. Create a .gitignore file to exclude unnecessary files:
cat > .gitignore << EOF
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Virtual Environment
venv/
ENV/
env/

# Local configuration
.env
config.json

# Logs
logs/
*.log

# Results
results/

# IDE files
.idea/
.vscode/
*.swp
*.swo
EOF
  3. Create a README.md file for your Space (Spaces read their configuration from the YAML block at the top of the README):
cat > README.md << EOF
---
title: GAIA Agent
emoji: 🌍
colorFrom: blue
colorTo: green
sdk: gradio
app_file: app.py
pinned: false
---

# GAIA Agent

GAIA (Grounded AI Alignment) is an AI agent designed to answer questions with grounded, factual information.

## Features

- Web search capabilities using multiple providers
- Academic research integration
- Reasoning tools for complex problems
- Memory system for context-aware responses

## Usage

Simply type your question in the input box and click "Submit" to get a response from GAIA.

## Examples

- "What is quantum computing?"
- "Explain the theory of relativity in simple terms."
- "What are the latest developments in AI safety?"

## About

GAIA is built with Python using LangChain, LangGraph, and GPT-4. It leverages various APIs to provide accurate and up-to-date information.
EOF

Step 4: Create a Requirements File for Hugging Face

Create or modify the requirements.txt file to include only the necessary packages:

cat > requirements.txt << EOF
gradio>=4.0.0
langchain>=0.0.267
langgraph>=0.0.15
openai>=1.1.1
tiktoken>=0.5.1
supabase>=2.0.3
requests>=2.31.0
python-dotenv>=1.0.0
EOF

Step 5: Set Up Environment Variables

In your Hugging Face Space:

  1. Go to the Settings tab of your Space
  2. Scroll down to the "Repository secrets" section
  3. Add your API keys and configuration as secrets:
    • OPENAI_API_KEY: Your OpenAI API key
    • SERPER_API_KEY: Your Serper API key (if used)
    • PERPLEXITY_API_KEY: Your Perplexity API key (if used)
    • Any other necessary API keys or configuration values
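In app.py, a small guard can fail fast with a readable message when a required secret is missing, instead of surfacing a cryptic downstream API error. This is a sketch: the secret names follow the list above, and which keys count as required depends on your configuration:

```python
import os

# Names match the repository secrets configured above; adjust to taste.
REQUIRED_SECRETS = ["OPENAI_API_KEY"]
OPTIONAL_SECRETS = ["SERPER_API_KEY", "PERPLEXITY_API_KEY"]

def missing_secrets():
    """Return the required secret names that are not set in the environment."""
    return [name for name in REQUIRED_SECRETS if not os.environ.get(name)]

missing = missing_secrets()
if missing:
    print(f"Missing required secrets: {', '.join(missing)}")
for name in OPTIONAL_SECRETS:
    if not os.environ.get(name):
        print(f"Note: optional secret {name} not set; related tools will be unavailable.")
```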

Step 6: Push Your Code to Hugging Face

Commit and push your code to the Hugging Face Space:

# Add your files
git add app.py requirements.txt README.md .gitignore src/

# Commit the changes
git commit -m "Initial GAIA deployment on Hugging Face"

# Push to Hugging Face
git push space main

After pushing, Hugging Face will automatically build and deploy your application. This may take a few minutes.

Step 7: Test Your Deployment

  1. Once the build is complete, navigate to your Space's URL: https://huggingface.co/spaces/your-username/gaia-agent
  2. Test the application by asking a few questions
  3. Check the Space logs for any errors or issues:
    • Open the "Logs" panel from the top of your Space page
    • Review both the build logs and the runtime (container) logs

Advanced Configuration

Custom Domain

To use a custom domain with your Space:

  1. Go to the "Settings" tab of your Space
  2. Scroll down to the "Custom domain" section
  3. Enter your domain name (e.g., gaia.yourdomain.com)
  4. Follow the instructions to set up DNS records

Upgrading Hardware

For better performance or to handle more traffic:

  1. Go to the "Settings" tab of your Space
  2. Scroll down to the "Space hardware" section
  3. Choose a higher tier (note: this will incur costs)
    • CPU Upgrade: For faster processing
    • GPU: For model hosting or intensive processing
    • Memory Boost: For handling larger datasets

Persistent Storage

To enable persistent storage for your Space:

  1. Go to the "Settings" tab of your Space
  2. Scroll down to the "Persistent storage" section
  3. Enable persistent storage (a paid add-on, billed by storage tier)

This allows you to store data that persists between restarts, such as logs or cached results.
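On Spaces, persistent storage is mounted at /data. A helper that falls back to a local directory keeps the same code working outside a Space (a sketch: the /data mount point is the Spaces convention, the fallback directory name is arbitrary):

```python
import os
from pathlib import Path

def data_dir() -> Path:
    """Use the Space's writable /data mount when present, else a local folder."""
    base = Path("/data") if os.access("/data", os.W_OK) else Path("local_data")
    base.mkdir(parents=True, exist_ok=True)
    return base

# Example: append to a log file that survives Space restarts.
log_file = data_dir() / "gaia.log"
with log_file.open("a") as f:
    f.write("app started\n")
```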

Authentication

To restrict access to authenticated users:

  1. In your app.py, modify the launch parameters:
demo.launch(auth=("username", "password"))
  2. Or for more flexible authentication:
demo.launch(auth_message="Enter GAIA password",
            auth=lambda u, p: p == os.environ.get("GAIA_PASSWORD", "default_password"))
  3. Add the GAIA_PASSWORD environment variable in your Space settings
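The lambda above works, but comparing with hmac.compare_digest avoids leaking information about the password through response timing, and refusing logins when no password is configured is safer than a hard-coded default. A standard-library sketch, with GAIA_PASSWORD being the secret named above:

```python
import hmac
import os

def check_auth(username: str, password: str) -> bool:
    """Constant-time check against the GAIA_PASSWORD repository secret."""
    expected = os.environ.get("GAIA_PASSWORD", "")
    # If no password is configured, refuse all logins rather than
    # falling back to a guessable default.
    return bool(expected) and hmac.compare_digest(password, expected)

# demo.launch(auth=check_auth)  # pass the callable to Gradio
```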

Scheduled Restarts

For long-running deployments, scheduled restarts can help maintain stability:

  1. Go to the "Settings" tab of your Space
  2. Use the "Factory reboot" option to restart the Space manually
  3. To automate restarts, call restart_space() from the huggingface_hub library in a scheduled job (e.g., a cron task or GitHub Action)

Optimizing for Hugging Face Spaces

Reducing Startup Time

To reduce startup time and improve user experience:

  1. Lazy-load components when possible:
def load_agent():
    if not hasattr(load_agent, "agent"):
        load_agent.agent = GaiaAgent(config=config)
    return load_agent.agent

def process_query(query, history):
    agent = load_agent()
    return agent.run(query)
  2. Use caching for expensive operations:
import functools

@functools.lru_cache(maxsize=100)
def get_cached_result(query):
    # Expensive operation, e.g. a full agent run for a repeated query
    return load_agent().run(query)

Memory Usage Optimization

To stay within Hugging Face's memory limits:

  1. Limit model usage:
config.set("models.default", "gpt-3.5-turbo")  # Cheaper and faster than GPT-4
  2. Implement efficient memory management:
# Clear working memory after each query
def process_query(query, history):
    agent = load_agent()
    result = agent.run(query)
    agent.reset(clear_memory=False)  # Clear working memory but keep conversation history
    return result
  3. Disable memory-intensive features in the configuration:
config.set("memory.supabase.enabled", False)  # Use simpler memory system
config.set("tools.image_analysis.enabled", False)  # Disable memory-intensive tools

Monitoring and Maintenance

Monitoring Usage

Monitor your Space's usage and performance:

  1. Go to the "Settings" tab of your Space
  2. Scroll down to the "Metrics" section
  3. View CPU, memory, and disk usage over time

Updating Your Deployment

To update your GAIA deployment:

  1. Make changes to your local repository
  2. Commit and push to your Hugging Face Space:
git add .
git commit -m "Update GAIA deployment"
git push space main

Handling Errors

If you encounter errors in your deployment:

  1. Check the Space logs for error messages
  2. Implement better error handling in your code:
def process_query(query, history):
    try:
        agent = load_agent()
        return agent.run(query)
    except Exception as e:
        # Log the error
        print(f"Error processing query: {str(e)}")
        # Return a user-friendly message
        return "I'm having trouble processing your request. Please try again or ask a different question."
  3. Set up monitoring to be notified of errors

Conclusion

You now have GAIA deployed on Hugging Face Spaces, making it accessible as a web application. This allows you to share your agent with others, collaborate with the community, and benefit from Hugging Face's infrastructure.

For more advanced deployments or custom integrations, consult the Hugging Face Spaces documentation.

If you encounter any issues or have questions, refer to the troubleshooting section or create an issue on the project's GitHub repository.