Delete checkpoints
fr/unit2/llama-index/.ipynb_checkpoints/agents-checkpoint.ipynb
DELETED
@@ -1,334 +0,0 @@
# Agents in LlamaIndex

This notebook is part of the <a href="https://huggingface.co/learn/agents-course/fr">Hugging Face Agents Course</a>, a free course that will guide you, from **beginner to expert**, through understanding, using, and building agents.

## Let's install the dependencies

We will install the dependencies for this unit.

```python
!pip install llama-index llama-index-vector-stores-chroma llama-index-llms-huggingface-api llama-index-embeddings-huggingface -U -q
```

We will also log in to the Hugging Face Hub to get access to the Inference API.

```python
from huggingface_hub import login

login()
```

## Initializing agents

Let's start by initializing an agent. We will use the basic `AgentWorkflow` class to create one.

```python
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI
from llama_index.core.agent.workflow import AgentWorkflow, ToolCallResult, AgentStream


def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b


def subtract(a: int, b: int) -> int:
    """Subtract two numbers"""
    return a - b


def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b


def divide(a: int, b: int) -> int:
    """Divide two numbers"""
    return a / b


llm = HuggingFaceInferenceAPI(model_name="Qwen/Qwen2.5-Coder-32B-Instruct")

agent = AgentWorkflow.from_tools_or_functions(
    tools_or_functions=[subtract, multiply, divide, add],
    llm=llm,
    system_prompt="You are a math agent that can add, subtract, multiply, and divide numbers using provided tools.",
)
```

Next, we can run the agent and get both the response and the reasoning behind the tool calls.

```python
handler = agent.run("What is (2 + 2) * 2?")
async for ev in handler.stream_events():
    if isinstance(ev, ToolCallResult):
        print("")
        print("Called tool: ", ev.tool_name, ev.tool_kwargs, "=>", ev.tool_output)
    elif isinstance(ev, AgentStream):  # show the thought process
        print(ev.delta, end="", flush=True)

resp = await handler
resp
```

In the same way, we can pass state and context to the agent.

```python
from llama_index.core.workflow import Context

ctx = Context(agent)

response = await agent.run("My name is Bob.", ctx=ctx)
response = await agent.run("What was my name again?", ctx=ctx)
response
```

Output (trimmed):

```
AgentOutput(response=ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, additional_kwargs={}, blocks=[TextBlock(block_type='text', text='Your name is Bob.')]), tool_calls=[], raw={...}, current_agent_name='Agent')
```

## Creating RAG agents with QueryEngineTools

Now let's reuse the `QueryEngine` we defined in the [previous section on tools](/tools.ipynb) and convert it into a `QueryEngineTool`. We will pass it to the `AgentWorkflow` class to create a RAG agent.

```python
import chromadb

from llama_index.core import VectorStoreIndex
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core.tools import QueryEngineTool
from llama_index.vector_stores.chroma import ChromaVectorStore

# Create a vector store
db = chromadb.PersistentClient(path="./alfred_chroma_db")
chroma_collection = db.get_or_create_collection("alfred")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

# Create a query engine
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
llm = HuggingFaceInferenceAPI(model_name="Qwen/Qwen2.5-Coder-32B-Instruct")
index = VectorStoreIndex.from_vector_store(
    vector_store=vector_store, embed_model=embed_model
)
query_engine = index.as_query_engine(llm=llm)
query_engine_tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="personas",
    description="descriptions for various types of personas",
    return_direct=False,
)

# Create a RAG agent
query_engine_agent = AgentWorkflow.from_tools_or_functions(
    tools_or_functions=[query_engine_tool],
    llm=llm,
    system_prompt="You are a helpful assistant that has access to a database containing persona descriptions. ",
)
```

And once again, we can get the response and the reasoning behind the tool calls.

```python
handler = query_engine_agent.run(
    "Search the database for 'science fiction' and return some persona descriptions."
)
async for ev in handler.stream_events():
    if isinstance(ev, ToolCallResult):
        print("")
        print("Called tool: ", ev.tool_name, ev.tool_kwargs, "=>", ev.tool_output)
    elif isinstance(ev, AgentStream):  # show the thought process
        print(ev.delta, end="", flush=True)

resp = await handler
resp
```

## Creating multi-agent systems

We can also create multi-agent systems by passing several agents to the `AgentWorkflow` class.

```python
from llama_index.core.agent.workflow import (
    AgentWorkflow,
    ReActAgent,
)


# Define some tools
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


def subtract(a: int, b: int) -> int:
    """Subtract two numbers."""
    return a - b


# Create agent configs
# NOTE: we can use FunctionAgent or ReActAgent here.
# FunctionAgent works for LLMs with a function-calling API.
# ReActAgent works for any LLM.
calculator_agent = ReActAgent(
    name="calculator",
    description="Performs basic arithmetic operations",
    system_prompt="You are a calculator assistant. Use your tools for any math operation.",
    tools=[add, subtract],
    llm=llm,
)

query_agent = ReActAgent(
    name="info_lookup",
    description="Looks up information about XYZ",
    system_prompt="Use your tool to query a RAG system to answer information about XYZ",
    tools=[query_engine_tool],
    llm=llm,
)

# Create and run the workflow
agent = AgentWorkflow(agents=[calculator_agent, query_agent], root_agent="calculator")

# Run the system
handler = agent.run(user_msg="Can you add 5 and 3?")
```

```python
async for ev in handler.stream_events():
    if isinstance(ev, ToolCallResult):
        print("")
        print("Called tool: ", ev.tool_name, ev.tool_kwargs, "=>", ev.tool_output)
    elif isinstance(ev, AgentStream):  # show the thought process
        print(ev.delta, end="", flush=True)

resp = await handler
resp
```

(Notebook metadata: Python 3 ipykernel, Python 3.12.7.)
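A side note on the run cells above: the same event-streaming loop is repeated three times. If you reuse it often, it can be factored into a small helper. This is a sketch built on the notebook's own imports (`ToolCallResult`, `AgentStream`), not part of the deleted file:

```python
from llama_index.core.agent.workflow import AgentStream, ToolCallResult


async def run_and_stream(agent, query: str, **kwargs):
    """Run an agent, printing tool calls and streamed tokens as they arrive."""
    handler = agent.run(query, **kwargs)
    async for ev in handler.stream_events():
        if isinstance(ev, ToolCallResult):
            # A completed tool call: name, arguments, and returned output
            print(f"\nCalled tool: {ev.tool_name} {ev.tool_kwargs} => {ev.tool_output}")
        elif isinstance(ev, AgentStream):
            # Incremental token deltas from the LLM (the thought process)
            print(ev.delta, end="", flush=True)
    return await handler


# Usage, e.g.: resp = await run_and_stream(agent, "What is (2 + 2) * 2?")
```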
fr/unit2/llama-index/.ipynb_checkpoints/components-checkpoint.ipynb
DELETED
The diff for this file is too large to render; see the raw file for its contents.
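Although the components diff is not shown, note that the agents and tools notebooks in this commit both read from an already-populated `alfred` Chroma collection via `VectorStoreIndex.from_vector_store`. For orientation only, here is a minimal sketch of how such a collection is typically filled with an `IngestionPipeline`; the directory path and splitter settings are illustrative assumptions, not taken from the deleted file:

```python
import chromadb
from llama_index.core import SimpleDirectoryReader
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.vector_stores.chroma import ChromaVectorStore

# Reuse the same persistent collection the other notebooks query
db = chromadb.PersistentClient(path="./alfred_chroma_db")
chroma_collection = db.get_or_create_collection("alfred")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

# Split documents into chunks, embed them, and write them into the vector store
pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=512, chunk_overlap=0),  # illustrative settings
        HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5"),
    ],
    vector_store=vector_store,
)

documents = SimpleDirectoryReader("./data").load_data()  # "./data" is an assumed path
pipeline.run(documents=documents)
```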
fr/unit2/llama-index/.ipynb_checkpoints/tools-checkpoint.ipynb
DELETED
@@ -1,274 +0,0 @@
# Tools in LlamaIndex

This notebook is part of the <a href="https://huggingface.co/learn/agents-course/fr">Hugging Face Agents Course</a>, a free course that will guide you, from **beginner to expert**, through understanding, using, and building agents.

## Let's install the dependencies

We will install the dependencies for this unit.

```python
!pip install llama-index llama-index-vector-stores-chroma llama-index-llms-huggingface-api llama-index-embeddings-huggingface llama-index-tools-google -U -q
```

We will also log in to the Hugging Face Hub to get access to the Inference API.

```python
from huggingface_hub import login

login()
```

## Creating a *FunctionTool*

Let's create a basic `FunctionTool` object and call it.

```python
from llama_index.core.tools import FunctionTool


def get_weather(location: str) -> str:
    """Useful for getting the weather for a given location."""
    print(f"Getting weather for {location}")
    return f"The weather in {location} is sunny"


tool = FunctionTool.from_defaults(
    get_weather,
    name="my_weather_tool",
    description="Useful for getting the weather for a given location.",
)
tool.call("New York")
```

## Creating a *QueryEngineTool*

Now let's reuse the `QueryEngine` we defined in the [previous section](/tools.ipynb) and convert it into a `QueryEngineTool`.

```python
import chromadb

from llama_index.core import VectorStoreIndex
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core.tools import QueryEngineTool
from llama_index.vector_stores.chroma import ChromaVectorStore

db = chromadb.PersistentClient(path="./alfred_chroma_db")
chroma_collection = db.get_or_create_collection("alfred")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
llm = HuggingFaceInferenceAPI(model_name="meta-llama/Llama-3.2-3B-Instruct")
index = VectorStoreIndex.from_vector_store(
    vector_store=vector_store, embed_model=embed_model
)
query_engine = index.as_query_engine(llm=llm)
tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="some useful name",
    description="some useful description",
)
await tool.acall(
    "Responds about research on the impact of AI on the future of work and society?"
)
```

Output (trimmed):

```
ToolOutput(content=' As an anthropologist, I am intrigued by the potential implications of AI on the future of work and society. ...', tool_name='some useful name', raw_input={'input': 'Responds about research on the impact of AI on the future of work and society?'}, raw_output=Response(..., source_nodes=[NodeWithScore(..., 'persona_1.txt', ..., score=0.3761845613489774), NodeWithScore(..., 'persona_1004.txt', ..., score=0.3733060058493167)], ...), is_error=False)
```

## Creating *ToolSpecs*

Let's create a `ToolSpec` from the `GmailToolSpec` on LlamaHub and convert it into a list of tools.

```python
from llama_index.tools.google import GmailToolSpec

tool_spec = GmailToolSpec()
tool_spec_list = tool_spec.to_tool_list()
tool_spec_list
```

Output:

```
[<llama_index.core.tools.function_tool.FunctionTool at 0x7f0d50623d90>,
 <llama_index.core.tools.function_tool.FunctionTool at 0x7f0d1c055210>,
 <llama_index.core.tools.function_tool.FunctionTool at 0x7f0d1c055780>,
 <llama_index.core.tools.function_tool.FunctionTool at 0x7f0d1c0556f0>,
 <llama_index.core.tools.function_tool.FunctionTool at 0x7f0d1c0559f0>,
 <llama_index.core.tools.function_tool.FunctionTool at 0x7f0d1c055b40>]
```

For a more detailed view of the tools, we can take a look at the `metadata` of each one.

```python
[print(tool.metadata.name, tool.metadata.description) for tool in tool_spec_list]
```

Output (trimmed):

```
load_data load_data() -> List[llama_index.core.schema.Document]
Load emails from the user's account.
search_messages search_messages(query: str, max_results: Optional[int] = None)
Searches email messages given a query string and the maximum number
    of results requested by the user ...
create_draft create_draft(to: Optional[List[str]] = None, subject: Optional[str] = None, message: Optional[str] = None) -> str
Create and insert a draft email. ...
update_draft update_draft(to: Optional[List[str]] = None, subject: Optional[str] = None, message: Optional[str] = None, draft_id: str = None) -> str
Update a draft email. ...
get_draft get_draft(draft_id: str = None) -> str
Get a draft email. ...
send_draft send_draft(draft_id: str = None) -> str
Sends a draft email. ...

[None, None, None, None, None, None]
```

(Notebook metadata: Python 3 ipykernel, Python 3.12.7.)
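A small wart in the final cell above: using a list comprehension purely for its `print` side effects is what produces the trailing `[None, None, None, None, None, None]` result. A plain loop, suggested here rather than taken from the deleted file, avoids that:

```python
# Print each tool's name and description without building a throwaway list of None
for gmail_tool in tool_spec_list:
    print(gmail_tool.metadata.name, gmail_tool.metadata.description)
```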
fr/unit2/llama-index/.ipynb_checkpoints/workflows-checkpoint.ipynb
DELETED
@@ -1,402 +0,0 @@
# *Workflows* in LlamaIndex

This notebook is part of the <a href="https://huggingface.co/learn/agents-course/fr">Hugging Face Agents Course</a>, a free course that will guide you, from **beginner to expert**, through understanding, using, and building agents.

## Let's install the dependencies

We will install the dependencies for this unit.

```python
!pip install llama-index llama-index-vector-stores-chroma llama-index-utils-workflow llama-index-llms-huggingface-api pyvis -U -q
```

We will also log in to the Hugging Face Hub to get access to the Inference API.

```python
from huggingface_hub import login

login()
```

## Basic workflow creation

We can start by creating a simple workflow. We use the `StartEvent` and `StopEvent` classes to define its start and end.

```python
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step


class MyWorkflow(Workflow):
    @step
    async def my_step(self, ev: StartEvent) -> StopEvent:
        # do something here
        return StopEvent(result="Hello, world!")


w = MyWorkflow(timeout=10, verbose=False)
result = await w.run()
result
```

Output:

```
'Hello, world!'
```

## Connecting multiple steps

We can also create multi-step workflows, passing event information between steps. Note that we can use type hints to specify the event type and the flow of the workflow.

```python
from llama_index.core.workflow import Event


class ProcessingEvent(Event):
    intermediate_result: str


class MultiStepWorkflow(Workflow):
    @step
    async def step_one(self, ev: StartEvent) -> ProcessingEvent:
        # Process the initial data
        return ProcessingEvent(intermediate_result="Step 1 complete")

    @step
    async def step_two(self, ev: ProcessingEvent) -> StopEvent:
        # Use the intermediate result
        final_result = f"Finished processing: {ev.intermediate_result}"
        return StopEvent(result=final_result)


w = MultiStepWorkflow(timeout=10, verbose=False)
result = await w.run()
result
```

Output:

```
'Finished processing: Step 1 complete'
```

## Loops and branches

We can also use type hints to create branches and loops. Note that the `|` operator lets a step return several possible event types.

```python
from llama_index.core.workflow import Event
import random


class ProcessingEvent(Event):
    intermediate_result: str


class LoopEvent(Event):
    loop_output: str


class MultiStepWorkflow(Workflow):
    @step
    async def step_one(self, ev: StartEvent | LoopEvent) -> ProcessingEvent | LoopEvent:
        if random.randint(0, 1) == 0:
            print("Bad thing happened")
            return LoopEvent(loop_output="Back to step one.")
        else:
            print("Good thing happened")
            return ProcessingEvent(intermediate_result="First step complete.")

    @step
    async def step_two(self, ev: ProcessingEvent) -> StopEvent:
        # Use the intermediate result
        final_result = f"Finished processing: {ev.intermediate_result}"
        return StopEvent(result=final_result)


w = MultiStepWorkflow(verbose=False)
result = await w.run()
result
```

Output:

```
Bad thing happened
Bad thing happened
Bad thing happened
Good thing happened
'Finished processing: First step complete.'
```

## Drawing workflows

We can also draw workflows with the `draw_all_possible_flows` function.

```python
from llama_index.utils.workflow import draw_all_possible_flows

draw_all_possible_flows(w)
```

Output:

```
<class 'NoneType'>
<class '__main__.ProcessingEvent'>
<class '__main__.LoopEvent'>
<class 'llama_index.core.workflow.events.StopEvent'>
workflow_all_flows.html
```

### State management

Instead of passing event information between steps, we can use the `Context` type hint to share information between them. This can be useful for longer-running workflows where you want to store information between steps.

```python
from llama_index.core.workflow import Event, Context
from llama_index.core.agent.workflow import ReActAgent


class ProcessingEvent(Event):
    intermediate_result: str


class MultiStepWorkflow(Workflow):
    @step
    async def step_one(self, ev: StartEvent, ctx: Context) -> ProcessingEvent:
        # Process the initial data
        await ctx.store.set("query", "What is the capital of France?")
        return ProcessingEvent(intermediate_result="Step 1 complete")

    @step
    async def step_two(self, ev: ProcessingEvent, ctx: Context) -> StopEvent:
        # Use the intermediate result
        query = await ctx.store.get("query")
        print(f"Query: {query}")
        final_result = f"Finished processing: {ev.intermediate_result}"
        return StopEvent(result=final_result)


w = MultiStepWorkflow(timeout=10, verbose=False)
result = await w.run()
result
```

Output:

```
Query: What is the capital of France?
'Finished processing: Step 1 complete'
```

## Multi-agent workflows

We can also create multi-agent workflows. Here we define two agents: one that multiplies two integers and one that adds two integers.

```python
from llama_index.core.agent.workflow import AgentWorkflow, ReActAgent
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI


# Define some tools
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b


llm = HuggingFaceInferenceAPI(model_name="Qwen/Qwen2.5-Coder-32B-Instruct")

# we can pass functions directly without FunctionTool -- the fn/docstring are parsed for the name/description
multiply_agent = ReActAgent(
    name="multiply_agent",
    description="Is able to multiply two integers",
    system_prompt="A helpful assistant that can use a tool to multiply numbers.",
    tools=[multiply],
    llm=llm,
)

addition_agent = ReActAgent(
    name="add_agent",
    description="Is able to add two integers",
    system_prompt="A helpful assistant that can use a tool to add numbers.",
    tools=[add],
    llm=llm,
)

# Create the workflow
workflow = AgentWorkflow(
    agents=[multiply_agent, addition_agent],
    root_agent="multiply_agent",
)

# Run the system
response = await workflow.run(user_msg="Can you add 5 and 3?")
response
```

Output (trimmed):

```
AgentOutput(response=ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, blocks=[TextBlock(block_type='text', text='5 and 3 add up to 8.')]), tool_calls=[ToolCallResult(tool_name='handoff', tool_kwargs={'to_agent': 'add_agent', 'reason': 'The user wants to add two numbers, and the add_agent is better suited for this task.'}, ...), ToolCallResult(tool_name='add', tool_kwargs={'a': 5, 'b': 3}, ..., raw_output=8, ...)], raw=..., current_agent_name='add_agent')
```

(Notebook metadata: Python 3 ipykernel, Python 3.12.7.)
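The workflows above only surface their final result. If you also want intermediate progress, the way the agents notebook streams tool calls, a custom workflow can publish events to the event stream with `ctx.write_event_to_stream`. A minimal sketch, assuming the current `llama_index.core.workflow` API and not part of the deleted file:

```python
from llama_index.core.workflow import (
    Context,
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)


class ProgressEvent(Event):
    msg: str


class StreamingWorkflow(Workflow):
    @step
    async def work(self, ev: StartEvent, ctx: Context) -> StopEvent:
        # Publish an intermediate event to consumers of stream_events()
        ctx.write_event_to_stream(ProgressEvent(msg="step is running"))
        return StopEvent(result="done")


w = StreamingWorkflow(timeout=10)
handler = w.run()
async for ev in handler.stream_events():
    if isinstance(ev, ProgressEvent):
        print(ev.msg)  # prints "step is running" before the final result
print(await handler)
```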