from CustomLLMMistral import CustomLLMMistral
from tools.robot_information import robot_information
import os
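
# The two local modules imported above (CustomLLMMistral and tools.robot_information)
# are not included in this file. Purely as a rough sketch (names and behaviour are
# assumptions, not the real implementation), a minimal robot_information tool could
# be declared with LangChain's @tool decorator:
#
#     from langchain_core.tools import tool
#
#     @tool
#     def robot_information(query: str) -> str:
#         """Return status information about the robot fleet."""
#         return "Robot fleet status: all units operational."
#
# The real module is only expected to expose the same .invoke() interface used below.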

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = f"InfiniFleetTrace"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "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"  # Update to your API key

llm = CustomLLMMistral()  # Local wrapper around a Mistral model, defined in the CustomLLMMistral module (not shown here)

# Quick manual check of the tool (uncomment to run):
# info = robot_information.invoke("test")
# print(info)

tools = [robot_information]  # Tools the agent is allowed to call

system="""

You are designed to solve tasks. Each task requires multiple steps that are represented by a markdown code snippet of a json blob.

The json structure should contain the following keys:

thought -> your thoughts

action -> name of a tool

action_input -> parameters to send to the tool



These are the tools you can use: {tool_names}.



These are the tools descriptions:



{tools}



If you have enough information to answer the query use the tool "Final Answer". Its parameters is the solution.

If there is not enough information, keep trying.



"""

human="""

Add the word "STOP" after each markdown snippet. Example:



```json

{{"thought": "<your thoughts>",

 "action": "<tool name or Final Answer to give a final answer>",

 "action_input": "<tool parameters or the final output"}}

```

STOP



This is my query="{input}". Write only the next step needed to solve it.

Your answer should be based in the previous tools executions, even if you think you know the answer.

Remember to add STOP after each snippet.



These were the previous steps given to solve this query and the information you already gathered:

"""

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Prompt layout: system message (tool list and instructions), optional chat history,
# the human turn, and the agent scratchpad that accumulates prior tool calls and observations.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", human),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)

from langchain.agents import create_json_chat_agent, AgentExecutor
from langchain.memory import ConversationBufferMemory

agent = create_json_chat_agent(
    llm=llm,
    tools=tools,
    prompt=prompt,
    stop_sequence=["STOP"],  # cut generation once the model emits the STOP marker
    template_tool_response="{observation}",  # feed the raw tool output back to the model
)

# ConversationBufferMemory backs the optional "chat_history" placeholder in the prompt.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True, handle_parsing_errors=True)

agent_executor.invoke({"input": "Who are you?"})
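
# Illustrative follow-up only (this query is an assumption, not part of the original script):
# AgentExecutor.invoke returns a dict whose "output" key holds the final answer, and the
# ConversationBufferMemory above carries the first exchange into this turn as {chat_history}.
result = agent_executor.invoke({"input": "Which tools can you use?"})
print(result["output"])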