---
language: "code"
license: "mit"
tags:
- machine-learning
- ai
- structured-planning
- llamaindex
model_name: "Structured Planning AI Agent"
model_type: "agent"
library_name: "llama-index"
---
# Implementing a Structured Planning AI Agent with LlamaIndex
1. Set up the environment
- Skip this step if you have already set up the environment
```bash
python -m venv .venv
source .venv/bin/activate
```
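On Windows, the activation command differs (Command Prompt shown below; PowerShell uses .venv\Scripts\Activate.ps1 instead):
```bat
python -m venv .venv
.venv\Scripts\activate
```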
2. Install LlamaIndex
```bash
pip install llama-index
```
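Depending on your llama-index version, the OpenAI LLM integration may ship as a separate package, and python-dotenv is handy for loading the API key in step 9. If either is missing from your environment, install them explicitly:
```bash
pip install llama-index-llms-openai python-dotenv
```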
3. Create a Python file
```bash
touch worker.py
```
Or, on Windows (Command Prompt):
```bat
echo. > worker.py
```
4. Open the file in VS Code
```bash
code worker.py
```
5. Add the needed imports
```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.core.agent import (
    StructuredPlannerAgent,
    FunctionCallingAgentWorker,
)
```
6. Define the function the agent will use as a tool
```python
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b
```
7. Define and configure the worker agent
```python
# Wrap the Python function as a tool the agent can call
multiply_tool = FunctionTool.from_defaults(fn=multiply)
llm = OpenAI(model="gpt-4o-mini")
# The worker executes individual sub-tasks; the planner agent breaks the
# overall task into sub-tasks and hands them to the worker
worker = FunctionCallingAgentWorker.from_tools([multiply_tool], llm=llm, verbose=True)
worker_agent = StructuredPlannerAgent(worker, [multiply_tool], verbose=True)
```
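Optionally, you can ask the planner for a plan without executing it, which makes the "structured planning" part visible. The sketch below assumes the lower-level planning API (create_plan and state.plan_dict) described in the LlamaIndex docs is available in your installed version:
```python
# Create a plan explicitly and inspect its sub-tasks before running anything.
# Assumes StructuredPlannerAgent exposes create_plan() and state.plan_dict
# in your llama-index version.
plan_id = worker_agent.create_plan("Solve the equation x = 123 * (x + 2y + 3)")
plan = worker_agent.state.plan_dict[plan_id]
for sub_task in plan.sub_tasks:
    print(f"===== Sub Task: {sub_task.name} =====")
    print("Expected output:", sub_task.expected_output)
    print("Dependencies:", sub_task.dependencies)
```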
8. Test the worker agent
```python
response = worker_agent.chat("Solve the equation x = 123 * (x + 2y + 3)")
print(response)
```
9. Create a .env file and add your API key
```bash
OPENAI_API_KEY="<your_api_key>"
```
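Note that worker.py does not read the .env file by itself. One minimal option (an assumption here, not part of the original steps) is to load it with python-dotenv near the top of worker.py, before the OpenAI LLM is created; alternatively, export OPENAI_API_KEY in your shell before running the script.
```python
# Load OPENAI_API_KEY from .env into the environment.
# Assumes python-dotenv is installed (pip install python-dotenv);
# place this before the OpenAI(...) call in worker.py.
from dotenv import load_dotenv

load_dotenv()
```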
10. Run the agent
```bash
python worker.py
```