---
title: AI Research Assistant
sdk: gradio
sdk_version: 4.38.1
app_file: app.py
license: apache-2.0
---
# 🧠 AI Research Assistant
An advanced AI-powered research assistant that combines web search capabilities with contextual awareness to provide comprehensive answers to complex questions.
## 🌟 Key Features
- **Real-time Streaming Output**: See responses as they're generated for immediate feedback
- **Contextual Awareness**: Incorporates current weather and space weather data
- **Web Search Integration**: Powered by Tavily API for up-to-date information
- **Smart Caching**: Redis-based caching for faster repeated queries
- **Intelligent Server Monitoring**: Clear guidance during model warm-up periods
- **Accurate Citations**: Real sources extracted from search results
- **Asynchronous Processing**: Parallel execution for optimal performance
- **Responsive Interface**: Modern Gradio UI with example queries
## 🏗️ Architecture
The application follows a modular architecture for maintainability and scalability:
```
myspace134v/
├── app.py                    # Main Gradio interface
├── modules/
│   ├── analyzer.py           # LLM interaction with streaming
│   ├── citation.py           # Citation generation and formatting
│   ├── context_enhancer.py   # Weather and space context (async)
│   ├── formatter.py          # Response formatting
│   ├── input_handler.py      # Input validation
│   ├── retriever.py          # Web search with Tavily
│   ├── server_cache.py       # Redis caching
│   ├── server_monitor.py     # Server health monitoring
│   ├── status_logger.py      # Event logging
│   ├── visualizer.py         # Output rendering
│   └── visualize_uptime.py   # System uptime monitoring
├── tests/                    # Unit tests
├── requirements.txt          # Dependencies
└── version.json              # Version tracking
```
## 🤖 AI Model Information
This assistant uses the **DavidAU/OpenAi-GPT-oss-20b-abliterated-uncensored-NEO-Imatrix-gguf** model hosted on Hugging Face Inference Endpoints; a brief streaming sketch follows the list below. It is a powerful open-source language model with:
- **20 Billion Parameters**: Capable of handling complex reasoning tasks
- **Extended Output Length**: Generates responses of up to 8192 tokens
- **Uncensored Capabilities**: Provides comprehensive answers without artificial limitations
- **Specialized Training**: Optimized for research and analytical tasks
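The snippet below is a minimal sketch of how streaming from this model could look using `huggingface_hub`'s `InferenceClient`; it is illustrative only and not the actual code in `modules/analyzer.py` (the prompt and usage are placeholders).

```python
# Sketch: streaming tokens from a Hugging Face Inference Endpoint.
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    # `model` may also be set to the dedicated endpoint URL instead of the repo id.
    model="DavidAU/OpenAi-GPT-oss-20b-abliterated-uncensored-NEO-Imatrix-gguf",
    token=os.environ["HF_TOKEN"],
)

def stream_answer(prompt):
    """Yield the response incrementally instead of waiting for the full text."""
    for token in client.text_generation(
        prompt,
        max_new_tokens=8192,   # matches the advertised response length
        stream=True,           # yields tokens as they are generated
    ):
        yield token

if __name__ == "__main__":
    for chunk in stream_answer("Summarize recent fusion energy milestones."):
        print(chunk, end="", flush=True)
```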
## 🔧 API Integrations
| Service | Purpose | Usage |
|---------|---------|-------|
| **Tavily** | Web Search | Real-time information retrieval |
| **Hugging Face Inference** | LLM Processing | Natural language understanding |
| **Redis** | Caching | Performance optimization |
| **NASA** | Space Data | Astronomical context |
| **OpenWeatherMap** | Weather Data | Environmental context |
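As an illustration of the caching layer above, the sketch below shows how repeated queries could be served from Redis using the standard `redis-py` client and the environment variables listed under Setup; the helper name `cached_search` is illustrative, not necessarily the interface exposed by `modules/server_cache.py`.

```python
# Illustrative Redis cache helper (not the exact server_cache.py API).
import json
import os

import redis

r = redis.Redis(
    host=os.environ["REDIS_HOST"],
    port=int(os.environ["REDIS_PORT"]),
    username=os.environ.get("REDIS_USERNAME"),
    password=os.environ.get("REDIS_PASSWORD"),
    decode_responses=True,
)

def cached_search(query, fetch, ttl=3600):
    """Return a cached result for `query`, or call `fetch(query)` and cache it."""
    key = f"search:{query}"
    hit = r.get(key)
    if hit is not None:
        return json.loads(hit)          # cache hit: skip the expensive call
    result = fetch(query)
    r.set(key, json.dumps(result), ex=ttl)  # cache miss: store with a TTL
    return result
```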
## ⚡ Enhanced Features
### 🔁 Streaming Output
Responses stream in real-time, allowing users to start reading before the complete answer is generated. This creates a more natural conversational experience.
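In Gradio, streaming output is usually implemented by making the handler a generator and yielding the partial answer on each step. The self-contained sketch below demonstrates the pattern with a placeholder answer; it is not the actual `app.py`.

```python
# Minimal Gradio streaming demo: the handler yields progressively longer text.
import time

import gradio as gr

def answer(question):
    partial = ""
    for word in f"Streaming a placeholder answer to: {question}".split():
        partial += word + " "
        time.sleep(0.05)   # simulate token latency
        yield partial      # Gradio re-renders the output on every yield

demo = gr.Interface(fn=answer, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()
```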
### 📚 Dynamic Citations
All information is properly sourced with clickable links to original content, ensuring transparency and enabling further exploration.
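A possible way to build such citations from Tavily-style results (each result carrying a `title` and `url`) is sketched below; `format_citations` is an illustrative name rather than the exact function in `modules/citation.py`.

```python
# Turn search results into a numbered, clickable Markdown source list (illustrative).
def format_citations(results):
    lines = []
    for i, item in enumerate(results, start=1):
        title = item.get("title", "Untitled source")
        url = item.get("url", "")
        lines.append(f"{i}. [{title}]({url})")
    return "\n".join(lines)

print(format_citations([
    {"title": "Example article", "url": "https://example.com/article"},
]))
```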
### ⚡ Asynchronous Operations
Weather data, space weather, and web searches run in parallel, significantly reducing response times.
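Conceptually, the parallelism looks like the sketch below, where the three lookups are awaited together with `asyncio.gather`; the coroutines are simple placeholders standing in for the real retriever and context modules.

```python
# Placeholder coroutines standing in for the real retriever / context modules.
import asyncio

async def fetch_weather():
    await asyncio.sleep(0.1)          # pretend network call
    return "Clear, 21 °C"

async def fetch_space_weather():
    await asyncio.sleep(0.1)
    return "No significant solar activity"

async def web_search(query):
    await asyncio.sleep(0.1)
    return f"Top results for: {query}"

async def gather_context(query):
    # All three calls run concurrently; total latency is roughly the slowest call.
    return await asyncio.gather(
        fetch_weather(), fetch_space_weather(), web_search(query)
    )

if __name__ == "__main__":
    print(asyncio.run(gather_context("fusion energy")))
```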
### 🧠 Contextual Intelligence
Each query is enhanced with the following (a prompt-assembly sketch follows this list):
- Current weather conditions
- Recent space events
- Accurate timestamps
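The sketch below illustrates how such context could be prepended to the user's question before it reaches the model; the exact format used by `modules/context_enhancer.py` may differ.

```python
# Illustrative prompt assembly: prepend the gathered context to the user query.
from datetime import datetime, timezone

def build_prompt(query, weather, space):
    timestamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return (
        f"Current time (UTC): {timestamp}\n"
        f"Local weather: {weather}\n"
        f"Space weather: {space}\n\n"
        f"Question: {query}"
    )

print(build_prompt("Explain today's aurora forecast.",
                   "Clear, 21 °C", "Minor geomagnetic storm watch"))
```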
### 🛡️ Server State Management
Intelligent monitoring detects when the model server is initializing and provides clear user guidance with estimated wait times.
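Dedicated endpoints commonly return HTTP 503 while the model is loading or scaling up, so a health probe along the following lines can distinguish a warming server from a ready one. This is a hedged sketch using the `requests` library, not the actual `modules/server_monitor.py`; the `HF_ENDPOINT_URL` variable is hypothetical and used only for illustration.

```python
# Probe the inference endpoint and report whether the model is still warming up.
import os

import requests

# HF_ENDPOINT_URL is a hypothetical setting used only for this sketch.
ENDPOINT_URL = os.environ.get("HF_ENDPOINT_URL", "https://example.endpoints.huggingface.cloud")

def server_status():
    headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}
    payload = {"inputs": "ping", "parameters": {"max_new_tokens": 1}}
    resp = requests.post(ENDPOINT_URL, headers=headers, json=payload, timeout=30)
    if resp.status_code == 503:
        # Endpoints commonly answer 503 while scaling up / loading weights.
        return "warming_up"
    resp.raise_for_status()
    return "ready"

print(server_status())
```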
## 🚀 Getting Started
### Prerequisites
- Python 3.8+
- Hugging Face account and token
- API keys for Tavily, NASA, and OpenWeatherMap
- Redis instance for caching
### Setup Instructions
1. Clone the repository
2. Set up required environment variables:
   ```bash
   export HF_TOKEN="your_hugging_face_token"
   export TAVILY_API_KEY="your_tavily_api_key"
   export REDIS_HOST="your_redis_host"
   export REDIS_PORT="your_redis_port"
   export REDIS_USERNAME="your_redis_username"
   export REDIS_PASSWORD="your_redis_password"
   export NASA_API_KEY="your_nasa_api_key"
   export OPENWEATHER_API_KEY="your_openweather_api_key"
   ```
3. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
4. Run the application:
   ```bash
   python app.py
   ```
## 📊 System Monitoring
The assistant includes built-in monitoring capabilities (a timing sketch follows this list):
- **Server Health Tracking**: Detects and reports server state changes
- **Performance Metrics**: Logs request processing times
- **Uptime Monitoring**: Tracks system availability
- **Failure Recovery**: Automatic handling of transient errors
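The timing sketch below shows the kind of per-request measurement such monitoring typically relies on; the decorator name and log format are illustrative, not the actual `modules/status_logger.py` interface.

```python
# Illustrative timing decorator: logs how long each request handler takes.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("research-assistant")

def timed(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            logger.info("%s finished in %.2f s", fn.__name__, elapsed)
    return wrapper

@timed
def handle_query(question):
    time.sleep(0.2)                      # stand-in for real processing
    return f"Answer to: {question}"

handle_query("What drives solar flares?")
```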
## 📋 Example Queries
Try these sample questions to see the assistant in action:
- "What are the latest developments in fusion energy research?"
- "How does climate change impact global food security?"
- "Explain the significance of recent Mars rover discoveries"
- "What are the economic implications of AI advancement?"
## 📄 License
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
🀝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## 📞 Support
For issues, questions, or feedback, please open an issue on the repository.