# User Guide

## Sidebar

- **Model**: Choose among HF, OpenAI, Gemini, Groq, and Fireworks models.
- **Input**: Describe your app or paste code/text.
- **Generate**: Click to invoke the AI pipeline.

## Tabs

- **Code**: View the generated code (editable).
- **Preview**: Live HTML preview (for web outputs).
- **History**: Conversation log with the assistant.

## Files & Plugins

- Upload reference files (PDF, DOCX, images) for extraction.
- Use **Plugins** to integrate GitHub, Slack, database queries, and other services.

---

# API Reference

## `models.py`

### `ModelInfo`

- `name: str`
- `id: str`
- `description: str`
- `default_provider: str`

### `find_model(identifier: str) -> Optional[ModelInfo]`

## `inference.py`

### `chat_completion(model_id, messages, provider=None, max_tokens=4096) -> str`

### `stream_chat_completion(model_id, messages, provider=None, max_tokens=4096) -> Generator[str]`

A usage sketch for these functions appears at the end of this document.

---

# Architecture

```
user
 └─> Gradio UI ──> app.py
        ├─> models.py    (registry)
        ├─> inference.py (routing)
        ├─> hf_client.py (clients)
        ├─> plugins.py   (extension)
        └─> deploy.py    (HF Spaces)
```

- **Data flow**: UI → `generation_code` → `inference.chat_completion` → HF/OpenAI/Gemini/Groq → UI
- **Extensibility**: Add new models in `models.py`; add providers in `hf_client.py`; add integrations via `plugins/` (see the registry sketch below).
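
---

To make the API Reference concrete, here is a minimal usage sketch. It assumes `find_model`, `chat_completion`, and `stream_chat_completion` behave as their signatures above suggest, and that `messages` uses an OpenAI-style role/content format; the model identifier is a placeholder, not an entry from the actual registry.

```python
# Minimal usage sketch; the message format and model identifier are assumptions
# based only on the signatures documented above.
from models import find_model
from inference import chat_completion, stream_chat_completion

model = find_model("my-org/my-model")   # placeholder identifier
if model is None:
    raise ValueError("Model not found in the registry")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Generate a minimal HTML landing page."},
]

# Blocking call: returns the full completion as one string.
reply = chat_completion(model.id, messages, provider=model.default_provider)
print(reply)

# Streaming call: yields text chunks as they arrive.
for chunk in stream_chat_completion(model.id, messages, max_tokens=1024):
    print(chunk, end="", flush=True)
```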
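
The Extensibility bullet says new models are registered in `models.py`. The sketch below shows what such an entry could look like, assuming `ModelInfo` is a dataclass with the four fields listed in the API Reference and that entries live in a module-level list; the list name `MODELS` and the sample model are hypothetical.

```python
# Hypothetical registry sketch for models.py; the MODELS list and the example
# entry are assumptions — only the ModelInfo fields and the find_model
# signature come from the API Reference above.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelInfo:
    name: str
    id: str
    description: str
    default_provider: str

MODELS: List[ModelInfo] = [
    ModelInfo(
        name="My New Model",
        id="my-org/my-new-model",            # hypothetical model id
        description="Example entry showing the registry shape.",
        default_provider="hf",
    ),
]

def find_model(identifier: str) -> Optional[ModelInfo]:
    # Match on either the display name or the id; return None if absent.
    for m in MODELS:
        if identifier in (m.name, m.id):
            return m
    return None
```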
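
The plugin interface itself is not documented in this guide, so the following is only an illustrative guess at a minimal registration pattern for `plugins.py`; every name in it (the `PLUGINS` dict, `register_plugin`, the GitHub example) is hypothetical.

```python
# Illustrative only: the real plugins.py contract is not documented here,
# so the registry dict, decorator, and sample plugin below are hypothetical.
from typing import Callable, Dict

PLUGINS: Dict[str, Callable[[str], str]] = {}

def register_plugin(name: str):
    """Record a plugin callable under a lookup name."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        PLUGINS[name] = fn
        return fn
    return wrap

@register_plugin("github_issues")
def github_issues(query: str) -> str:
    # A real integration would call the GitHub API; this stub only echoes.
    return f"[github] results for: {query}"

if __name__ == "__main__":
    print(PLUGINS["github_issues"]("open bugs in my-repo"))
```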