---
title: EUDR Retriever
emoji: 🐠
colorFrom: yellow
colorTo: pink
sdk: docker
pinned: false
---
# ChatFed Retriever - MCP Server
A semantic document retrieval and reranking service designed for ChatFed RAG (Retrieval-Augmented Generation) pipelines. This module serves as an **MCP (Model Context Protocol) server** that retrieves semantically similar documents from vector databases with optional cross-encoder reranking.
## MCP Endpoint
The main MCP function is `retrieve_mcp`, which performs top-k retrieval with optional reranking against an external vector database.
**Parameters**:
- `query` (str, required): The search query text
- `reports_filter` (str, optional): Comma-separated list of specific report filenames
- `sources_filter` (str, optional): Filter by document source type
- `subtype_filter` (str, optional): Filter by document subtype
- `year_filter` (str, optional): Comma-separated list of years to filter by
**Returns**: List of dictionaries containing:
- `answer`: Document content
- `answer_metadata`: Document metadata
- `score`: Relevance score (not returned when the reranker is enabled)
**Example usage**:
```python
from gradio_client import Client
client = Client("ENTER CONTAINER URL / SPACE ID")
result = client.predict(
    query="...",
    reports_filter="",
    sources_filter="",
    subtype_filter="",
    year_filter="",
    api_name="/retrieve_mcp"
)
print(result)
```
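Once retrieved, the list of dictionaries can be assembled into a context string for the downstream RAG generator. A minimal sketch; the sample payload below is hypothetical, shaped only like the documented return value:

```python
# Hypothetical sample shaped like the documented return value
# (a real call to /retrieve_mcp returns live data).
sample_result = [
    {
        "answer": "Deforestation-free supply chain requirements apply to ...",
        "answer_metadata": {"filename": "report_2023.pdf", "year": "2023"},
        "score": 0.87,
    },
]

def format_context(results):
    """Join retrieved passages into a single context string,
    prefixing each passage with its source filename."""
    return "\n\n".join(
        f"[{r['answer_metadata'].get('filename', 'unknown')}] {r['answer']}"
        for r in results
    )

print(format_context(sample_result))
```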
## Configuration
### Vector Store Configuration
1. Configure the data source connection for your vector store provider
2. Set the embedding model to match the one used to build the data source
3. Set the retriever parameters (e.g. top-k)
4. [Optional] Set the reranker parameters
5. Run the app:
```bash
docker build -t chatfed-retriever .
docker run -p 7860:7860 chatfed-retriever
```
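Configuration can also be passed at launch. A sketch of this pattern, assuming the app reads environment variables; the variable names below are hypothetical, so check this repo's configuration loader for the real keys:

```bash
# Variable names are illustrative only, not confirmed by this repo
docker run -p 7860:7860 \
  -e VECTORSTORE_URL="http://qdrant:6333" \
  -e EMBEDDING_MODEL="BAAI/bge-m3" \
  -e RETRIEVER_TOP_K="10" \
  chatfed-retriever
```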