---
title: MASX OpenChat
emoji: 🤖
colorFrom: indigo
colorTo: blue
sdk: docker
pinned: false
app_file: app.py
---
# MASX OpenChat LLM
**A FastAPI service that brings the OpenChat-3.5 language model to life through a clean, scalable REST API.**
## What is this?
MASX OpenChat LLM wraps the OpenChat-3.5 model in a FastAPI service, exposing conversational text generation through a simple REST API.
### 🚀 Key Features
- **Powered by OpenChat-3.5**: a 7B-parameter conversational language model
- **FastAPI + Docker**: Clean, modular, and containerized
- **Easy integration**: REST API ready for real-world apps
---
## 🚀 Quick Start
### Requirements
- **8GB+ RAM** (16GB+ recommended)
- **GPU with 8GB+ VRAM** (optional but faster)
### Install dependencies
```bash
pip install -r requirements.txt
```
### Config
```bash
cp env.example .env
# Edit .env with your preferred settings
```
### Start the server
```bash
python app.py
```
**That's it!** Your AI service is now running at `http://localhost:8080`
## Use
### Basic Chat Request
```bash
curl -X POST "http://localhost:8080/chat" \
-H "Content-Type: application/json" \
-d '{
"prompt": "Hello! Can you help me write a Python function?",
"max_tokens": 256,
"temperature": 0.7
}'
```
### Response Format
```json
{
"response": "Of course! I'd be happy to help you write a Python function. What kind of function would you like to create? Please let me know what it should do, and I'll help you implement it with proper syntax and best practices."
}
```
### API Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/status` | GET | Check service health and get model info |
| `/chat` | POST | Generate AI responses |
| `/docs` | GET | Interactive API documentation (Swagger UI) |
| `/redoc` | GET | Alternative API documentation |
### Request Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `prompt` | string | **required** | Your input text/question |
| `max_tokens` | integer | 256 | Maximum tokens to generate |
| `temperature` | float | 0.0 | Creativity level (0.0 = deterministic, 2.0 = very creative) |
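As a sketch, the same request can be issued from Python using only the standard library. The helper below mirrors the documented fields (`prompt`, `max_tokens`, `temperature`); the function names are illustrative, not part of the service:

```python
import json
import urllib.request


def build_chat_payload(prompt: str, max_tokens: int = 256,
                       temperature: float = 0.0) -> dict:
    """Assemble the JSON body expected by POST /chat."""
    return {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def chat(prompt: str, base_url: str = "http://localhost:8080", **kwargs) -> str:
    """Send a chat request and return the model's reply text."""
    body = json.dumps(build_chat_payload(prompt, **kwargs)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `chat("Hello!", temperature=0.7)` returns the `response` string from the JSON body shown above.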
## 🔧 Configuration
The service is highly configurable through environment variables. Copy `env.example` to `.env` and customize:
### Essential Settings
```bash
# Server Configuration
HOST=0.0.0.0
PORT=8080
LOG_LEVEL=info
```
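These variables are typically read once at startup. The sketch below shows one common pattern with plain `os.environ` lookups; the variable names match the block above, but the fallback defaults are assumptions about this service:

```python
import os


def read_settings(env=os.environ) -> dict:
    """Read server settings, falling back to documented defaults."""
    return {
        "host": env.get("HOST", "0.0.0.0"),
        "port": int(env.get("PORT", "8080")),
        "log_level": env.get("LOG_LEVEL", "info"),
    }
```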
### Advanced Settings
See `env.example` for the full list of available options.
## 🐳 Docker Deployment
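The Space is declared as a Docker app (`sdk: docker` in the front matter). A typical build-and-run sequence, assuming a standard `Dockerfile` in the repo root and the default port configured above (the image name is illustrative):

```shell
# Build the image from the repo root
docker build -t masx-openchat-llm .

# Run it, mapping the service port and passing the env file
docker run --rm -p 8080:8080 --env-file .env masx-openchat-llm
```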
## 📊 Monitoring & Health
### Health Check
```bash
curl http://localhost:8080/status
```
Response:
```json
{
"status": "ok",
"max_tokens": 4096
}
```
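A deployment script can gate downstream startup on this endpoint. The small check below assumes only the response shape shown above:

```python
import json


def is_ready(status_body: str) -> bool:
    """Return True when the /status response reports a healthy service."""
    try:
        data = json.loads(status_body)
    except json.JSONDecodeError:
        return False
    return data.get("status") == "ok"
```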
### Logs
The service provides comprehensive logging:
- **Application logs**: `./logs/app.log`
- **Console output**: Real-time server logs
- **Error tracking**: Detailed error information with stack traces
## 🛠️ Development
### Project Structure
```
masx-openchat-llm/
├── app.py              # FastAPI application
├── model_loader.py     # Model loading and configuration
├── requirements.txt    # Python dependencies
├── env.example         # Environment variables template
├── .gitignore          # Git ignore rules
└── README.md           # This file
```
### Adding Features
1. **New Endpoints**: Add routes in `app.py`
2. **Model Configuration**: Modify `model_loader.py`
3. **Dependencies**: Update `requirements.txt`
4. **Environment Variables**: Add to `env.example`
---
**Made by MASX AI**
*Ready to build the future of AI-powered applications? Start with MASX OpenChat LLM!*