---
title: AI Research Assistant
sdk: gradio
sdk_version: 4.38.1
app_file: app.py
license: apache-2.0
---
# AI Research Assistant
An advanced AI-powered research assistant that combines web search capabilities with contextual awareness to provide comprehensive answers to complex questions.
## Key Features
- **Real-time Streaming Output**: See responses as they're generated for immediate feedback
- **Contextual Awareness**: Incorporates current weather and space weather data
- **Web Search Integration**: Powered by Tavily API for up-to-date information
- **Smart Caching**: Redis-based caching for faster repeated queries
- **Intelligent Server Monitoring**: Clear guidance during model warm-up periods
- **Accurate Citations**: Real sources extracted from search results
- **Asynchronous Processing**: Parallel execution for optimal performance
- **Responsive Interface**: Modern Gradio UI with example queries
## Architecture
The application follows a modular architecture for maintainability and scalability:
```
myspace134v/
├── app.py                   # Main Gradio interface
├── modules/
│   ├── analyzer.py          # LLM interaction with streaming
│   ├── citation.py          # Citation generation and formatting
│   ├── context_enhancer.py  # Weather and space context (async)
│   ├── formatter.py         # Response formatting
│   ├── input_handler.py     # Input validation
│   ├── retriever.py         # Web search with Tavily
│   ├── server_cache.py      # Redis caching
│   ├── server_monitor.py    # Server health monitoring
│   ├── status_logger.py     # Event logging
│   ├── visualizer.py        # Output rendering
│   └── visualize_uptime.py  # System uptime monitoring
├── tests/                   # Unit tests
├── requirements.txt         # Dependencies
└── version.json             # Version tracking
```
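The sketch below shows roughly how `app.py` could wire these modules together. The function names inside each module are illustrative assumptions, not the actual API.
```python
# app.py wiring sketch -- module-level function names are hypothetical
import gradio as gr

from modules.input_handler import validate_query
from modules.retriever import search_web
from modules.context_enhancer import gather_context
from modules.analyzer import stream_answer
from modules.citation import build_citations
from modules.formatter import render_markdown

def answer(question: str):
    query = validate_query(question)
    results = search_web(query)      # Tavily search results
    context = gather_context()       # weather + space weather snapshot
    partial = ""
    for chunk in stream_answer(query, results, context):  # streamed LLM output
        partial += chunk
        yield render_markdown(partial)
    yield render_markdown(partial) + "\n\n" + build_citations(results)

demo = gr.Interface(
    fn=answer,
    inputs=gr.Textbox(label="Ask a research question"),
    outputs=gr.Markdown(),
    title="AI Research Assistant",
)

if __name__ == "__main__":
    demo.launch()
```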
## AI Model Information
This assistant uses the **DavidAU/OpenAi-GPT-oss-20b-abliterated-uncensored-NEO-Imatrix-gguf** model hosted on Hugging Face Endpoints. This is a powerful open-source language model with:
- **20 Billion Parameters**: Capable of handling complex reasoning tasks
- **Extended Context Window**: Supports up to 8,192 tokens of context
- **Uncensored Capabilities**: Provides comprehensive answers without artificial limitations
- **Specialized Training**: Optimized for research and analytical tasks
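As a minimal sketch of how such an endpoint can be queried with streaming via `huggingface_hub` (the endpoint URL and generation parameters below are placeholders, not the app's actual settings):
```python
import os
from huggingface_hub import InferenceClient

# Placeholder URL for the dedicated Inference Endpoint hosting the model
client = InferenceClient(
    model="https://your-endpoint.endpoints.huggingface.cloud",
    token=os.environ["HF_TOKEN"],
)

# Stream tokens as they are generated instead of waiting for the full answer
for token in client.text_generation(
    "Summarize the latest developments in fusion energy research.",
    max_new_tokens=512,
    temperature=0.7,
    stream=True,
):
    print(token, end="", flush=True)
```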
## API Integrations
| Service | Purpose | Usage |
|---------|---------|-------|
| **Tavily** | Web Search | Real-time information retrieval |
| **Hugging Face Inference** | LLM Processing | Natural language understanding |
| **Redis** | Caching | Performance optimization |
| **NASA** | Space Data | Astronomical context |
| **OpenWeatherMap** | Weather Data | Environmental context |
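For example, the Redis layer can cache complete answers keyed by query, roughly as sketched below (the key scheme, TTL, and helper names are assumptions about `server_cache.py`):
```python
import hashlib
import json
import os

import redis

r = redis.Redis(
    host=os.environ["REDIS_HOST"],
    port=int(os.environ["REDIS_PORT"]),
    username=os.environ.get("REDIS_USERNAME"),
    password=os.environ.get("REDIS_PASSWORD"),
    decode_responses=True,
)

def _key(query: str) -> str:
    # Hash the query so arbitrary text maps to a fixed-length Redis key
    return "answer:" + hashlib.sha256(query.encode()).hexdigest()

def get_cached(query: str):
    raw = r.get(_key(query))
    return json.loads(raw) if raw else None

def set_cached(query: str, answer: dict, ttl_seconds: int = 3600):
    # Expire entries after an hour so repeated queries stay reasonably fresh
    r.setex(_key(query), ttl_seconds, json.dumps(answer))
```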
## Enhanced Features
### Streaming Output
Responses stream in real-time, allowing users to start reading before the complete answer is generated. This creates a more natural conversational experience.
### Dynamic Citations
All information is properly sourced with clickable links to original content, ensuring transparency and enabling further exploration.
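A rough sketch of how citations can be built from Tavily search results (the helper name and output format are assumptions about `citation.py`):
```python
import os
from tavily import TavilyClient

tavily = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

def build_citations(query: str) -> str:
    # Each Tavily result carries a title and URL that can be linked directly
    response = tavily.search(query=query, max_results=5)
    lines = ["**Sources**"]
    for i, result in enumerate(response.get("results", []), start=1):
        lines.append(f"{i}. [{result['title']}]({result['url']})")
    return "\n".join(lines)
```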
### Asynchronous Operations
Weather data, space weather, and web searches run in parallel, significantly reducing response times.
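A sketch of the pattern (endpoint URLs and parameters are illustrative; the actual fetchers live in `context_enhancer.py`):
```python
import asyncio
import os

import aiohttp

async def _fetch_json(session: aiohttp.ClientSession, url: str, params: dict):
    async with session.get(url, params=params) as resp:
        return await resp.json()

async def gather_context(city: str = "London"):
    # Run the weather and space-weather requests concurrently; a failure in one
    # source should not block the answer, hence return_exceptions=True.
    async with aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=10)) as session:
        weather, space = await asyncio.gather(
            _fetch_json(
                session,
                "https://api.openweathermap.org/data/2.5/weather",
                {"q": city, "appid": os.environ["OPENWEATHER_API_KEY"]},
            ),
            _fetch_json(
                session,
                "https://api.nasa.gov/DONKI/notifications",
                {"api_key": os.environ["NASA_API_KEY"]},
            ),
            return_exceptions=True,
        )
        return {"weather": weather, "space": space}
```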
### Contextual Intelligence
Each query is enhanced with:
- Current weather conditions
- Recent space events
- Accurate timestamps
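A minimal sketch of what that enhancement might look like (the prompt template is an assumption):
```python
from datetime import datetime, timezone

def enhance_query(query: str, weather: str, space_events: str) -> str:
    # Prepend the gathered context so the model can ground its answer in "now"
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    return (
        f"Current time: {timestamp}\n"
        f"Local weather: {weather}\n"
        f"Recent space events: {space_events}\n\n"
        f"Question: {query}"
    )
```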
### Server State Management
Intelligent monitoring detects when the model server is initializing and provides clear user guidance with estimated wait times.
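As an illustration of the idea (this assumes the endpoint follows the Hugging Face convention of answering HTTP 503 with an `estimated_time` field while the model loads; the actual logic lives in `server_monitor.py`):
```python
import os

import requests

def check_server(endpoint_url: str) -> str:
    # Send a tiny probe request; a 503 with "estimated_time" means the model is still warming up
    resp = requests.post(
        endpoint_url,
        headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"},
        json={"inputs": "ping", "parameters": {"max_new_tokens": 1}},
        timeout=30,
    )
    if resp.status_code == 503:
        wait = resp.json().get("estimated_time", "unknown")
        return f"The model server is warming up; estimated wait: {wait} seconds."
    resp.raise_for_status()
    return "Server is ready."
```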
## Getting Started
### Prerequisites
- Python 3.8+
- Hugging Face account and token
- API keys for Tavily, NASA, and OpenWeatherMap
- Redis instance for caching
### Setup Instructions
1. Clone the repository
2. Set up required environment variables:
```bash
export HF_TOKEN="your_hugging_face_token"
export TAVILY_API_KEY="your_tavily_api_key"
export REDIS_HOST="your_redis_host"
export REDIS_PORT="your_redis_port"
export REDIS_USERNAME="your_redis_username"
export REDIS_PASSWORD="your_redis_password"
export NASA_API_KEY="your_nasa_api_key"
export OPENWEATHER_API_KEY="your_openweather_api_key"
```
3. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
4. Run the application:
   ```bash
   python app.py
   ```
## System Monitoring
The assistant includes built-in monitoring capabilities:
- **Server Health Tracking**: Detects and reports server state changes
- **Performance Metrics**: Logs request processing times
- **Uptime Monitoring**: Tracks system availability
- **Failure Recovery**: Automatic handling of transient errors
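For instance, request timing can be captured with a small decorator along these lines (the decorator name and log format are assumptions about `status_logger.py`):
```python
import functools
import logging
import time

logger = logging.getLogger("research_assistant")

def log_duration(func):
    """Log how long the wrapped function takes, even if it raises."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            logger.info("%s finished in %.2fs", func.__name__, elapsed)
    return wrapper
```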
## Example Queries
Try these sample questions to see the assistant in action:
- "What are the latest developments in fusion energy research?"
- "How does climate change impact global food security?"
- "Explain the significance of recent Mars rover discoveries"
- "What are the economic implications of AI advancement?"
## License
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Support
For issues, questions, or feedback, please open an issue on the repository.