---
title: Json Structured
emoji: π
colorFrom: red
colorTo: gray
sdk: gradio
sdk_version: 5.33.0
app_file: app.py
pinned: false
short_description: Plain text to json using llama.cpp
---
# Plain Text to JSON with llama.cpp

This Hugging Face Space converts plain text into structured JSON using llama.cpp for efficient CPU inference.

## Features

- **llama.cpp Integration**: Uses llama-cpp-python for efficient model inference (see the sketch after this list)
- **Gradio Interface**: User-friendly web interface
- **JSON Conversion**: Converts unstructured text to structured JSON
- **Model Management**: Load and manage GGUF models
- **Demo Mode**: Basic functionality without requiring a model
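
The core pattern behind these features is a Gradio interface wrapped around a llama-cpp-python model. The actual `app.py` may differ; in the sketch below, the `model.gguf` filename, prompt wording, and UI controls are assumptions, not the Space's exact code.

```python
# Minimal sketch of the llama.cpp + Gradio pattern behind this Space.
# Assumption: a GGUF file named "model.gguf" sits next to app.py.
import json

import gradio as gr
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=2048, verbose=False)

def text_to_json(text: str, temperature: float, max_tokens: int) -> str:
    prompt = (
        "Convert the following plain text into structured JSON. "
        "Respond with JSON only.\n\n" + text + "\n\nJSON:"
    )
    out = llm(prompt, temperature=temperature, max_tokens=int(max_tokens))
    raw = out["choices"][0]["text"].strip()
    try:
        # Pretty-print when the model returned valid JSON.
        return json.dumps(json.loads(raw), indent=2)
    except json.JSONDecodeError:
        # Otherwise show the raw completion so the user can see what happened.
        return raw

demo = gr.Interface(
    fn=text_to_json,
    inputs=[
        gr.Textbox(lines=8, label="Plain text"),
        gr.Slider(0.0, 1.0, value=0.2, label="Temperature"),
        gr.Slider(64, 2048, value=512, step=64, label="Max tokens"),
    ],
    outputs=gr.Code(language="json", label="Structured JSON"),
)

if __name__ == "__main__":
    demo.launch()
```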
## Setup

The Space automatically installs the following (a minimal file layout is sketched after this list):

- `llama-cpp-python` for llama.cpp integration
- Required build tools (`build-essential`, `cmake`)
- Gradio and other dependencies
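
On Spaces, apt packages are conventionally listed in `packages.txt` and Python packages in `requirements.txt`. A minimal pair consistent with the list above might look like this (the unpinned versions are an assumption):

```text
# packages.txt (apt packages, one per line)
build-essential
cmake

# requirements.txt (pip packages)
llama-cpp-python
gradio
```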
## Usage

1. **Demo Mode**: Use "Demo (No Model)" for basic text-to-JSON conversion
2. **Full Mode**: Load a GGUF model for AI-powered conversion
3. **Customize**: Adjust temperature and max_tokens for different outputs (a call sketch follows this list)
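
For reference, llama-cpp-python exposes `temperature` and `max_tokens` directly on its completion calls, and recent versions also accept an OpenAI-style `response_format` for JSON output. A hedged sketch, assuming a chat-capable GGUF model is already loaded as `llm`:

```python
# Sketch of a "Full Mode" conversion call. Assumes `llm` is a loaded,
# chat-capable llama_cpp.Llama instance; the input text is illustrative.
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Extract the user's text into structured JSON."},
        {"role": "user", "content": "Alice, 34, lives in Berlin and works as a chemist."},
    ],
    response_format={"type": "json_object"},  # constrain the output to valid JSON
    temperature=0.2,   # lower values give more deterministic structure
    max_tokens=512,    # upper bound on generated tokens
)
print(result["choices"][0]["message"]["content"])
```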
## Model Requirements

- Models must be in GGUF format
- Recommended: Small to medium-sized models for better performance
- Popular options: Llama 2, CodeLlama, or other instruction-tuned models (a download sketch follows this list)
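
One way to fetch a GGUF model at startup is via `huggingface_hub`; the repo and filename below are placeholders for whatever model you choose, and a quantized build (e.g. Q4_K_M) keeps memory use reasonable on CPU Spaces.

```python
# Sketch: download a quantized GGUF model from the Hub and load it with llama.cpp.
# The repo_id and filename are examples, not a requirement of this Space.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGUF",
    filename="llama-2-7b-chat.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)
```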
## Configuration

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference