{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Workflows in LlamaIndex\n", "\n", "\n", "This notebook is part of the Hugging Face Agents course, a free course that will guide you, from **beginner to expert**, in understanding, using and building agents.\n", "![Agents course share](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/communication/share.png)\n", "\n", "## Let's install the dependencies\n", "\n", "We will install the dependencies for this unit." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install llama-index llama-index-vector-stores-chroma llama-index-utils-workflow llama-index-llms-huggingface-api pyvis -U -q" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will also log in to the Hugging Face Hub to have access to the Inference API." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from huggingface_hub import login\n", "\n", "login()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Basic Workflow Creation\n", "\n", "We can start by creating a simple workflow. We use the `StartEvent` and `StopEvent` classes to define the start and stop of the workflow."
] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Hello, world!'" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step\n", "\n", "\n", "class MyWorkflow(Workflow):\n", " @step\n", " async def my_step(self, ev: StartEvent) -> StopEvent:\n", " # do something here\n", " return StopEvent(result=\"Hello, world!\")\n", "\n", "\n", "w = MyWorkflow(timeout=10, verbose=False)\n", "result = await w.run()\n", "result" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Connecting Multiple Steps\n", "\n", "We can also create multi-step workflows. Here, we pass the event information between steps. Note that we can use type hinting to specify the event type and the flow of the workflow." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Finished processing: Step 1 complete'" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from llama_index.core.workflow import Event\n", "\n", "\n", "class ProcessingEvent(Event):\n", " intermediate_result: str\n", "\n", "\n", "class MultiStepWorkflow(Workflow):\n", " @step\n", " async def step_one(self, ev: StartEvent) -> ProcessingEvent:\n", " # Process the initial data\n", " return ProcessingEvent(intermediate_result=\"Step 1 complete\")\n", "\n", " @step\n", " async def step_two(self, ev: ProcessingEvent) -> StopEvent:\n", " # Use the intermediate result\n", " final_result = f\"Finished processing: {ev.intermediate_result}\"\n", " return StopEvent(result=final_result)\n", "\n", "\n", "w = MultiStepWorkflow(timeout=10, verbose=False)\n", "result = await w.run()\n", "result" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loops and Branches\n",
"\n", "We can also use type hinting to create branches and loops. Note that we can use the `|` operator to specify that a step can return multiple event types." ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Bad thing happened\n", "Bad thing happened\n", "Bad thing happened\n", "Good thing happened\n" ] }, { "data": { "text/plain": [ "'Finished processing: First step complete.'" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from llama_index.core.workflow import Event\n", "import random\n", "\n", "\n", "class ProcessingEvent(Event):\n", " intermediate_result: str\n", "\n", "\n", "class LoopEvent(Event):\n", " loop_output: str\n", "\n", "\n", "class MultiStepWorkflow(Workflow):\n", " @step\n", " async def step_one(self, ev: StartEvent | LoopEvent) -> ProcessingEvent | LoopEvent:\n", " if random.randint(0, 1) == 0:\n", " print(\"Bad thing happened\")\n", " return LoopEvent(loop_output=\"Back to step one.\")\n", " else:\n", " print(\"Good thing happened\")\n", " return ProcessingEvent(intermediate_result=\"First step complete.\")\n", "\n", " @step\n", " async def step_two(self, ev: ProcessingEvent) -> StopEvent:\n", " # Use the intermediate result\n", " final_result = f\"Finished processing: {ev.intermediate_result}\"\n", " return StopEvent(result=final_result)\n", "\n", "\n", "w = MultiStepWorkflow(verbose=False)\n", "result = await w.run()\n", "result" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Drawing Workflows\n", "\n", "We can also draw workflows with the `draw_all_possible_flows` function."
] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\n", "\n", "workflow_all_flows.html\n" ] } ], "source": [ "from llama_index.utils.workflow import draw_all_possible_flows\n", "\n", "draw_all_possible_flows(w)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![drawing](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/llama-index/workflow-draw.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### State Management\n", "\n", "Instead of passing the event information between steps, we can use the `Context` type hint to pass information between steps. \n", "This can be useful for longer-running workflows, where you want to store information between steps." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Query: What is the capital of France?\n" ] }, { "data": { "text/plain": [ "'Finished processing: Step 1 complete'" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from llama_index.core.workflow import Event, Context\n", "\n", "\n", "class ProcessingEvent(Event):\n", " intermediate_result: str\n", "\n", "\n", "class MultiStepWorkflow(Workflow):\n", " @step\n", " async def step_one(self, ev: StartEvent, ctx: Context) -> ProcessingEvent:\n", " # Process the initial data\n", " await ctx.store.set(\"query\", \"What is the capital of France?\")\n", " return ProcessingEvent(intermediate_result=\"Step 1 complete\")\n", "\n", " @step\n", " async def step_two(self, ev: ProcessingEvent, ctx: Context) -> StopEvent:\n", " # Use the intermediate result\n", " query = await ctx.store.get(\"query\")\n", " print(f\"Query: {query}\")\n", " final_result = f\"Finished processing: {ev.intermediate_result}\"\n", " return StopEvent(result=final_result)\n", "\n", "\n", "w = MultiStepWorkflow(timeout=10, verbose=False)\n", "result = await w.run()\n", "result" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multi-Agent Workflows\n", "\n", "We can also create multi-agent workflows. Here, we define two agents, one that multiplies two integers and one that adds two integers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AgentOutput(response=ChatMessage(role=, additional_kwargs={}, blocks=[TextBlock(block_type='text', text='5 and 3 add up to 8.')]), tool_calls=[ToolCallResult(tool_name='handoff', tool_kwargs={'to_agent': 'add_agent', 'reason': 'The user wants to add two numbers, and the add_agent is better suited for this task.'}, tool_id='831895e7-3502-4642-92ea-8626e21ed83b', tool_output=ToolOutput(content='Agent add_agent is now handling the request due to the following reason: The user wants to add two numbers, and the add_agent is better suited for this task..\n", "Please continue with the current request.', tool_name='handoff', raw_input={'args': (), 'kwargs': {'to_agent': 'add_agent', 'reason': 'The user wants to add two numbers, and the add_agent is better suited for this task.'}}, raw_output='Agent add_agent is now handling the request due to the following reason: The user wants to add two numbers, and the add_agent is better suited for this task..\n", "Please continue with the current request.', is_error=False), return_direct=True), ToolCallResult(tool_name='add', tool_kwargs={'a': 5, 'b': 3}, tool_id='c29dc3f7-eaa7-4ba7-b49b-90908f860cc5', tool_output=ToolOutput(content='8', tool_name='add', raw_input={'args': (), 'kwargs': {'a': 5, 'b': 3}}, raw_output=8, is_error=False), return_direct=False)],
raw=ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(role='assistant', content='.', tool_call_id=None, tool_calls=None), index=0, finish_reason=None, logprobs=None)], created=1744553546, id='', model='Qwen/Qwen2.5-Coder-32B-Instruct', system_fingerprint='3.2.1-sha-4d28897', usage=None, object='chat.completion.chunk'), current_agent_name='add_agent')" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from llama_index.core.agent.workflow import AgentWorkflow, ReActAgent\n", "from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI\n", "\n", "# Define some tools\n", "def add(a: int, b: int) -> int:\n", " \"\"\"Add two numbers.\"\"\"\n", " return a + b\n", "\n", "def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "llm = HuggingFaceInferenceAPI(model_name=\"Qwen/Qwen2.5-Coder-32B-Instruct\")\n", "\n", "# we can pass functions directly without FunctionTool -- the fn/docstring are parsed for the name/description\n", "multiply_agent = ReActAgent(\n", " name=\"multiply_agent\",\n", " description=\"Is able to multiply two integers\",\n", " system_prompt=\"A helpful assistant that can use a tool to multiply numbers.\",\n", " tools=[multiply],\n", " llm=llm,\n", ")\n", "\n", "addition_agent = ReActAgent(\n", " name=\"add_agent\",\n", " description=\"Is able to add two integers\",\n", " system_prompt=\"A helpful assistant that can use a tool to add numbers.\",\n", " tools=[add],\n", " llm=llm,\n", ")\n", "\n", "# Create the workflow\n", "workflow = AgentWorkflow(\n", " agents=[multiply_agent, addition_agent],\n", " root_agent=\"multiply_agent\"\n", ")\n", "\n", "# Run the system\n", "response = await workflow.run(user_msg=\"Can you add 5 and 3?\")\n", "response" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.7" } }, "nbformat": 4, "nbformat_minor": 4 }