---
title: Json Structured
emoji: π
colorFrom: red
colorTo: gray
sdk: gradio
sdk_version: 5.33.0
app_file: app.py
pinned: false
short_description: Plain text to json using llama.cpp
---
# Plain Text to JSON with llama.cpp
This Hugging Face Space converts plain text into structured JSON format using llama.cpp for efficient CPU inference, powered by the Osmosis Structure 0.6B model.
## Features
- **llama.cpp Integration**: Uses `llama-cpp-python` for efficient CPU model inference (see the sketch after this list)
- **Osmosis Structure Model**: Specialized 0.6B-parameter model for structured data extraction
- **Gradio Interface**: User-friendly web interface
- **JSON Conversion**: Converts unstructured text to well-formatted JSON
- **Auto-Download**: Automatically downloads the Osmosis model on first use
- **Demo Mode**: Basic functionality without requiring the AI model
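
The core conversion can be expressed in a few lines of `llama-cpp-python`. The sketch below is illustrative only, not the Space's actual `app.py`: the local GGUF filename, the prompt wording, and the `text_to_json` helper are assumptions.

```python
# Minimal sketch of the inference path (assumed usage, not the Space's app.py).
# Requires: pip install llama-cpp-python, and a locally downloaded GGUF file.
from llama_cpp import Llama

MODEL_PATH = "Osmosis-Structure-0.6B-BF16.gguf"  # assumed local filename

# Load the GGUF model for CPU inference.
llm = Llama(model_path=MODEL_PATH, n_ctx=2048, verbose=False)

def text_to_json(text: str, temperature: float = 0.2, max_tokens: int = 512) -> str:
    """Prompt the model to convert plain text into a JSON object."""
    prompt = (
        "Convert the following text into structured JSON.\n\n"
        f"Text: {text}\n\nJSON:"
    )
    result = llm(prompt, temperature=temperature, max_tokens=max_tokens)
    return result["choices"][0]["text"].strip()

print(text_to_json("Alice is 30 years old and lives in Paris."))
```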
## Setup
On startup, the Space automatically:

- Installs `llama-cpp-python` for llama.cpp integration
- Installs the required build tools (`build-essential`, `cmake`)
- Installs Gradio and other dependencies
- Downloads the Osmosis Structure 0.6B model (~1.2GB) on first use (see the sketch after this list)
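
For reference, the first-use download could be handled with `huggingface_hub`. This is a minimal sketch under assumptions: the exact GGUF filename inside the repository is not confirmed by this README.

```python
# Sketch of the first-use model download (assumed approach, not the Space's code).
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="osmosis-ai/Osmosis-Structure-0.6B",
    filename="Osmosis-Structure-0.6B-BF16.gguf",  # assumed filename
)
print(f"Model downloaded to: {model_path}")
```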
## Usage
- **Quick Start**: Run `python setup_and_run.py` for automated setup
- **Demo Mode**: Use "Demo (No Model)" for basic text-to-JSON conversion
- **Full Mode**: Click "Load Model" to download and use the Osmosis model
- **Customize**: Adjust `temperature` and `max_tokens` for different output styles (see the sketch after this list)
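
As a rough illustration of the "Customize" step, a Gradio interface can expose `temperature` and `max_tokens` as sliders. This sketch assumes a `text_to_json()` helper like the one shown under Features; the actual `app.py` layout and defaults may differ.

```python
# Sketch of a Gradio UI exposing temperature and max_tokens (assumed layout).
import gradio as gr

def convert(text, temperature, max_tokens):
    # text_to_json() is the assumed helper sketched in the Features section.
    return text_to_json(text, temperature=temperature, max_tokens=int(max_tokens))

demo = gr.Interface(
    fn=convert,
    inputs=[
        gr.Textbox(label="Plain text"),
        gr.Slider(0.0, 1.5, value=0.2, label="Temperature"),
        gr.Slider(64, 2048, value=512, step=64, label="Max tokens"),
    ],
    outputs=gr.Code(label="JSON output", language="json"),
)

if __name__ == "__main__":
    demo.launch()
```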
## Model Details
- **Model**: Osmosis Structure 0.6B BF16 GGUF
- **Repository**: https://huggingface.co/osmosis-ai/Osmosis-Structure-0.6B
- **Specialization**: Structure extraction and JSON generation
- **Size**: ~1.2GB download
- **Format**: GGUF (optimized for llama.cpp)
## Configuration
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference