---
title: GEPA Prompt Tuner
emoji: 🏒
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.38.2
python_version: "3.10"
app_file: app.py
pinned: false
license: mit
---

# GEPA Prompt Optimizer for Hugging Face Models

This Space is a functional implementation of the GEPA (Genetic-Pareto) framework, as described in the paper "GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning".

It allows you to automatically optimize a prompt for a target model (like google/gemma-2b-it) by having it "learn" from feedback on a small training set.

## 🧬 How It Works

The application uses an evolutionary approach to refine a "seed" prompt over several iterations:

1. **Selection:** It selects a promising prompt from its current pool of candidates using a Pareto-based strategy, which favors diversity.
2. **Rollout:** It runs the selected prompt on a task using your target Hugging Face model (e.g., Gemma).
3. **Reflection:** It uses a powerful "reflector" model (Google's Gemini 1.5 Flash) to analyze the prompt's performance, the output, and detailed feedback.
4. **Mutation:** The reflector model proposes a new, improved prompt designed to fix the observed failures.
5. **Evaluation:** The new prompt is evaluated, and if it shows improvement, it is added to the candidate pool.

This cycle repeats until the defined "rollout budget" is exhausted, leaving you with the best-performing prompt.
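The cycle above can be sketched in Python. This is a minimal illustration, not this Space's actual code: the helpers `run_target_model` (calls the target HF model), `reflect_and_mutate` (calls the reflector model), and `score` (per-example metric) are hypothetical names you would supply.

```python
# Sketch of a GEPA-style loop. The three callables passed in are
# hypothetical stand-ins for the app's real model-calling functions.
import random

def dominates(a, b):
    """True if score vector `a` Pareto-dominates score vector `b`."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def optimize(seed_prompt, train_set, budget,
             run_target_model, reflect_and_mutate, score):
    """Evolve `seed_prompt` until roughly `budget` target-model calls are spent."""
    candidates = [{"prompt": seed_prompt,
                   "scores": [score(run_target_model(seed_prompt, ex), ex)
                              for ex in train_set]}]
    calls = len(train_set)
    # Stop before an iteration (1 rollout + full re-evaluation) would exceed budget.
    while calls + 1 + len(train_set) <= budget:
        # Selection: pick randomly among the Pareto front (favors diversity).
        front = [c for c in candidates
                 if not any(dominates(o["scores"], c["scores"])
                            for o in candidates if o is not c)]
        parent = random.choice(front)
        # Rollout: run the parent prompt on one training example.
        ex = random.choice(train_set)
        output = run_target_model(parent["prompt"], ex)
        calls += 1
        # Reflection + Mutation: the reflector proposes an improved prompt.
        child_prompt = reflect_and_mutate(parent["prompt"], ex, output)
        # Evaluation: keep the child only if it improves the total score.
        child_scores = [score(run_target_model(child_prompt, e), e)
                        for e in train_set]
        calls += len(train_set)
        if sum(child_scores) > sum(parent["scores"]):
            candidates.append({"prompt": child_prompt, "scores": child_scores})
    return max(candidates, key=lambda c: sum(c["scores"]))["prompt"]
```

Keeping per-example score vectors (rather than a single average) is what makes the Pareto selection possible: a candidate survives if no other candidate beats it on every training example.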

## 🚀 How to Use

**1. Provide API keys:**

- **Hugging Face API Token:** Enter your Hugging Face token. This is used to run inference on the target model you want to optimize.
- **Google Gemini API Key:** Enter your Gemini API key. This is required for the "reflection" step. You can get a key from Google AI Studio.

**2. Configure the optimization:**

- **Target Model ID:** The Hugging Face model you want to create a prompt for (e.g., `google/gemma-2b-it`).
- **Initial Seed Prompt:** The starting prompt. Your goal is to improve this!
- **Training Data:** A small JSON dataset with input fields and evaluation criteria (e.g., `expected_keywords`). You must adapt this for your specific task.
- **Budget:** The total number of times the target model will be called. A higher budget allows more refinement but takes longer.
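As a reference point, the training data might look like the example below. The `input` and `expected_keywords` field names come from the description above, but the exact schema depends on the app's evaluator, so treat this as a template to adapt rather than a fixed format.

```json
[
  {
    "input": "Explain photosynthesis in one sentence.",
    "expected_keywords": ["sunlight", "carbon dioxide", "glucose"]
  },
  {
    "input": "Summarize the water cycle in one sentence.",
    "expected_keywords": ["evaporation", "condensation", "precipitation"]
  }
]
```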

**3. Start Optimization:** Click the button and watch the logs to see the evolutionary process in action. The best prompt found so far updates in real time.