---
license: apache-2.0
language:
- es
base_model: meta-llama/Llama-3.3-70B-Instruct
tags:
- peft
- lora
- whatsapp
- text-splitting
- message-segmentation
- spanish
- fine-tuned
library_name: peft
pipeline_tag: text-generation
model_type: llama
widget:
- text: "buenos dias novedades?"
  example_title: "Greeting + Question"
- text: "perfecto que haces?"
  example_title: "Confirmation + Question"
- text: "aqui andamos que haces?"
  example_title: "Status + Question"
---

# 📱 WazapSplitter-LLM

Splits text into natural WhatsApp-style message segments.

**Input:** `"buenos dias queria confirmar la hora de la reunion"`
**Output:** `["buenos días", "quería confirmar la hora de la reunión"]`

## Quick Usage

### TypeScript/JavaScript

```typescript
async function splitMessage(text: string): Promise<string[]> {
  const prompt = `Split messages at natural breaks into JSON array.
Common patterns: greeting+question, statement+question, topic+followup.
Keep original words, only add logical splits.

User: ${text}
Assistant:`;

  const response = await fetch(
    "https://api-inference.huggingface.co/models/joseAndres777/WazapSplitter-LLM",
    {
      method: "POST",
      headers: {
        "Authorization": "Bearer YOUR_HF_TOKEN",
        "Content-Type": "application/json"
      },
      body: JSON.stringify({
        inputs: prompt,
        parameters: {
          max_new_tokens: 100,
          temperature: 0.3,
          return_full_text: false // return only the completion, not the echoed prompt
        }
      })
    }
  );

  const data = await response.json();
  return JSON.parse(data[0].generated_text);
}

// Example
const segments = await splitMessage("hola como estas que tal todo?");
console.log(segments); // ["hola", "como estas", "que tal todo?"]
```

### Chatbot Integration

```typescript
// Make responses feel more human
const segments = await splitMessage(botResponse);

for (const segment of segments) {
  await sendMessage(segment);
  await delay(1000 + Math.random() * 2000); // Human-like timing
}
```
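
The integration example above assumes `sendMessage` and `delay` helpers from your own bot stack; they are not provided by this model. A minimal sketch of both (hypothetical names, swap `sendMessage` for your messaging SDK's send call):

```typescript
// Hypothetical helpers assumed by the integration example; adapt to your bot framework.
function delay(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function sendMessage(text: string): Promise<void> {
  // Placeholder transport: replace with your WhatsApp/chat API client call.
  console.log(`[bot] ${text}`);
}
```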
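
### Robust Parsing

`splitMessage` passes the model output straight to `JSON.parse`, so any response that is not a clean JSON array will throw. A minimal defensive wrapper (hypothetical `safeSplit` helper, not part of this repo) that falls back to sending the text as a single segment:

```typescript
// Hypothetical wrapper: fall back to the original text if the model's output
// cannot be parsed into an array of strings.
async function safeSplit(text: string): Promise<string[]> {
  try {
    const segments = await splitMessage(text);
    if (Array.isArray(segments) && segments.every((s) => typeof s === "string")) {
      return segments;
    }
  } catch {
    // Malformed JSON or request failures land here.
  }
  return [text]; // Send the message unsplit rather than failing.
}
```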