---
title: Json Structured
emoji: πŸƒ
colorFrom: red
colorTo: gray
sdk: gradio
sdk_version: 5.33.0
app_file: app.py
pinned: false
short_description: Plain text to json using llama.cpp
---

# Plain Text to JSON with llama.cpp

This Hugging Face Space converts plain text into structured JSON using llama.cpp for efficient CPU inference.

## Features

- **llama.cpp Integration**: Uses llama-cpp-python for efficient model inference
- **Gradio Interface**: User-friendly web interface
- **JSON Conversion**: Converts unstructured text to structured JSON
- **Model Management**: Load and manage GGUF models
- **Demo Mode**: Basic functionality without requiring a model

## Setup

The Space automatically installs the following (a minimal dependency sketch follows this list):
- `llama-cpp-python` for llama.cpp integration
- Required build tools (`build-essential`, `cmake`)
- Gradio and other dependencies
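
As a rough illustration (not the Space's actual files), pip dependencies would typically be declared in `requirements.txt`, while the build tools would go in `packages.txt`, which Hugging Face Spaces use for Debian packages such as `build-essential` and `cmake`:

```text
# requirements.txt (illustrative; pin versions as needed)
llama-cpp-python
gradio
```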

## Usage

1. **Demo Mode**: Use "Demo (No Model)" for basic text-to-JSON conversion
2. **Full Mode**: Load a GGUF model for AI-powered conversion (a code sketch follows this list)
3. **Customize**: Adjust temperature and max_tokens for different outputs
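
For reference, the core of full mode might look roughly like the sketch below. The helper name, prompt, model path, and default parameters are assumptions for illustration; the Space's actual `app.py` may differ.

```python
from llama_cpp import Llama  # provided by llama-cpp-python

# Load a local GGUF model on CPU (path and context size are placeholder values).
llm = Llama(model_path="models/model.gguf", n_ctx=2048)

def text_to_json(text: str, temperature: float = 0.2, max_tokens: int = 512) -> str:
    """Hypothetical helper: prompt the model to emit JSON for the given text."""
    prompt = (
        "Convert the following plain text into structured JSON.\n\n"
        f"Text: {text}\n\nJSON:"
    )
    out = llm.create_completion(prompt, temperature=temperature, max_tokens=max_tokens)
    return out["choices"][0]["text"].strip()

print(text_to_json("Alice is 30 years old and lives in Paris."))
```

Lower temperatures tend to produce more deterministic, well-formed JSON, while `max_tokens` bounds the length of the generated output.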

## Model Requirements

- Models must be in GGUF format
- Recommended: Small to medium-sized models for better performance
- Popular options: Llama 2, CodeLlama, or other instruction-tuned models (see the download sketch below)
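
As one example, a quantized GGUF file can be fetched from the Hub with `huggingface_hub`; the repository and filename below are just one possible choice, not a requirement of this Space:

```python
from huggingface_hub import hf_hub_download

# Example only: any GGUF repository/filename on the Hub works the same way.
model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGUF",
    filename="llama-2-7b-chat.Q4_K_M.gguf",
)
print(model_path)  # local path to pass to Llama(model_path=...)
```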

## Configuration

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference