How to convert Runnables to Tools
Here we will demonstrate how to convert a LangChain Runnable into a tool that can be used by agents, chains, or chat models.
Dependencies

Note: this guide requires langchain-core >= 0.2.13. We will also use OpenAI for embeddings, but any LangChain embeddings should suffice. We will use a simple LangGraph agent for demonstration purposes.
%%capture --no-stderr
%pip install -U langchain-core langchain-openai langgraph
LangChain tools are interfaces that an agent, chain, or chat model can use to interact with the world. See here for how-to guides covering tool calling, built-in tools, custom tools, and more.
LangChain tools, which are instances of BaseTool, are Runnables with additional constraints that enable them to be invoked effectively by language models:

- Their inputs are constrained to be serializable, specifically strings and Python dict objects;
- They contain names and descriptions indicating how and when they should be used;
- They may contain a detailed args_schema for their arguments. That is, while a tool (as a Runnable) might accept a single dict input, the specific keys and type information needed to populate the dict should be specified in args_schema.
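As a plain-Python illustration of the first constraint, a tool call can be thought of as a name plus a JSON-serializable dict of arguments; the payload shape below is hypothetical, for illustration only, not LangChain's internal format:

```python
import json

# Hypothetical tool-call payload -- illustration only, not LangChain's internals.
tool_call = {
    "name": "multiplier",
    "args": {"a": 3, "b": [1, 2]},  # strings, numbers, lists, dicts: all serializable
}

# Serializability is what lets a model emit the call and the framework replay it.
wire = json.dumps(tool_call)
decoded = json.loads(wire)
print(decoded["args"])  # {'a': 3, 'b': [1, 2]}
```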
Runnables that accept string or dict input can be converted to tools using the as_tool method, which allows for the specification of names, descriptions, and additional schema information for arguments.
Basic usage

With typed dict input:
from typing import List
from langchain_core.runnables import RunnableLambda
from typing_extensions import TypedDict
class Args(TypedDict):
    a: int
    b: List[int]

def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))
runnable = RunnableLambda(f)
as_tool = runnable.as_tool(
    name="My tool",
    description="Explanation of when to use the tool.",
)
print(as_tool.description)
as_tool.args_schema.schema()
Explanation of when to use the tool.
{'title': 'My tool',
'type': 'object',
'properties': {'a': {'title': 'A', 'type': 'integer'},
'b': {'title': 'B', 'type': 'array', 'items': {'type': 'integer'}}},
'required': ['a', 'b']}
as_tool.invoke({"a": 3, "b": [1, 2]})
'6'
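To see where the '6' above comes from, the wrapped function can be traced in plain Python, without LangChain:

```python
# Plain-Python trace of the example input (no LangChain needed).
def f(x: dict) -> str:
    return str(x["a"] * max(x["b"]))

# a=3 and max(b)=max([1, 2])=2, so 3 * 2 = 6, returned as a string.
print(f({"a": 3, "b": [1, 2]}))  # '6'
```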
In the absence of typing information, arg types can be specified via arg_types:
from typing import Any, Dict
def g(x: Dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))
runnable = RunnableLambda(g)
as_tool = runnable.as_tool(
    name="My tool",
    description="Explanation of when to use the tool.",
    arg_types={"a": int, "b": List[int]},
)
Alternatively, the schema can be fully specified by directly passing the desired args_schema for the tool:
from langchain_core.pydantic_v1 import BaseModel, Field
class GSchema(BaseModel):
    """Apply a function to an integer and a list of integers."""

    a: int = Field(..., description="Integer")
    b: List[int] = Field(..., description="List of ints")
runnable = RunnableLambda(g)
as_tool = runnable.as_tool(GSchema)
String input is also supported:
def f(x: str) -> str:
    return x + "a"

def g(x: str) -> str:
    return x + "z"
runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")
'baz'
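The `|` above pipes f's output into g before the result is wrapped as a tool. As a toy sketch of that piping behavior (not LangChain's actual implementation):

```python
# Toy sketch of Runnable-style piping -- not LangChain's actual implementation.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: run self first, then feed the result to `other`.
        other_fn = other.fn if isinstance(other, Step) else other
        return Step(lambda x: other_fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

chain = Step(lambda x: x + "a") | (lambda x: x + "z")
print(chain.invoke("b"))  # 'baz'
```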
In agents
Below we will incorporate LangChain Runnables as tools in an agent application. We will demonstrate with a document retriever and a simple RAG chain.
We first instantiate a chat model that supports tool calling:
- OpenAI
- Anthropic
- Azure
- Google
- Cohere
- NVIDIA
- FireworksAI
- Groq
- MistralAI
- TogetherAI
pip install -qU langchain-openai
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass()
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini")
pip install -qU langchain-anthropic
import getpass
import os
os.environ["ANTHROPIC_API_KEY"] = getpass.getpass()
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
pip install -qU langchain-openai
import getpass
import os
os.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass()
from langchain_openai import AzureChatOpenAI
llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
    openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
)
pip install -qU langchain-google-vertexai
import getpass
import os
os.environ["GOOGLE_API_KEY"] = getpass.getpass()
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model="gemini-1.5-flash")
pip install -qU langchain-cohere
import getpass
import os
os.environ["COHERE_API_KEY"] = getpass.getpass()
from langchain_cohere import ChatCohere
llm = ChatCohere(model="command-r-plus")
pip install -qU langchain-nvidia-ai-endpoints
import getpass
import os
os.environ["NVIDIA_API_KEY"] = getpass.getpass()
from langchain_nvidia_ai_endpoints import ChatNVIDIA
llm = ChatNVIDIA(model="meta/llama3-70b-instruct")
pip install -qU langchain-fireworks
import getpass
import os
os.environ["FIREWORKS_API_KEY"] = getpass.getpass()
from langchain_fireworks import ChatFireworks
llm = ChatFireworks(model="accounts/fireworks/models/llama-v3p1-70b-instruct")
pip install -qU langchain-groq
import getpass
import os
os.environ["GROQ_API_KEY"] = getpass.getpass()
from langchain_groq import ChatGroq
llm = ChatGroq(model="llama3-8b-8192")
pip install -qU langchain-mistralai
import getpass
import os
os.environ["MISTRAL_API_KEY"] = getpass.getpass()
from langchain_mistralai import ChatMistralAI
llm = ChatMistralAI(model="mistral-large-latest")
pip install -qU langchain-openai
import getpass
import os
os.environ["TOGETHER_API_KEY"] = getpass.getpass()
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
)
Next, following the RAG tutorial, let's first construct a retriever:
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings
documents = [
    Document(
        page_content="Dogs are great companions, known for their loyalty and friendliness.",
    ),
    Document(
        page_content="Cats are independent pets that often enjoy their own space.",
    ),
]

vectorstore = InMemoryVectorStore.from_documents(
    documents, embedding=OpenAIEmbeddings()
)

retriever = vectorstore.as_retriever(
    search_type="similarity",
    search_kwargs={"k": 1},
)
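To build intuition for what the k=1 similarity retriever does, here is a toy sketch that scores the two documents above by word overlap instead of embeddings; this is not LangChain's implementation:

```python
# Toy k=1 retrieval over the two example documents, scoring by word overlap
# instead of embedding similarity -- not LangChain's implementation.
docs = [
    "Dogs are great companions, known for their loyalty and friendliness.",
    "Cats are independent pets that often enjoy their own space.",
]

def retrieve(query: str, k: int = 1):
    query_words = set(query.lower().split())

    def score(doc: str) -> int:
        # Count shared words between the query and the document.
        return len(query_words & set(doc.lower().split()))

    return sorted(docs, key=score, reverse=True)[:k]

print(retrieve("what are dogs known for"))
```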
Next we use a simple prebuilt LangGraph agent and provide it the tool:
from langgraph.prebuilt import create_react_agent
tools = [
    retriever.as_tool(
        name="pet_info_retriever",
        description="Get information about pets.",
    )
]
agent = create_react_agent(llm, tools)

for chunk in agent.stream({"messages": [("human", "What are dogs known for?")]}):
    print(chunk)
    print("----")
{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_W8cnfOjwqEn4cFcg19LN9mYD', 'function': {'arguments': '{"__arg1":"dogs"}', 'name': 'pet_info_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 60, 'total_tokens': 79}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-d7f81de9-1fb7-4caf-81ed-16dcdb0b2ab4-0', tool_calls=[{'name': 'pet_info_retriever', 'args': {'__arg1': 'dogs'}, 'id': 'call_W8cnfOjwqEn4cFcg19LN9mYD'}], usage_metadata={'input_tokens': 60, 'output_tokens': 19, 'total_tokens': 79})]}}
----
{'tools': {'messages': [ToolMessage(content="[Document(id='86f835fe-4bbe-4ec6-aeb4-489a8b541707', page_content='Dogs are great companions, known for their loyalty and friendliness.')]", name='pet_info_retriever', tool_call_id='call_W8cnfOjwqEn4cFcg19LN9mYD')]}}
----
{'agent': {'messages': [AIMessage(content='Dogs are known for being great companions, known for their loyalty and friendliness.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 134, 'total_tokens': 152}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-9ca5847a-a5eb-44c0-a774-84cc2c5bbc5b-0', usage_metadata={'input_tokens': 134, 'output_tokens': 18, 'total_tokens': 152})]}}
----
See the LangSmith trace for the above run.
Going a step further, we can create a simple RAG chain that takes an additional parameter, here the "style" of the answer.
from operator import itemgetter
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
system_prompt = """
You are an assistant for question-answering tasks.
Use the below context to answer the question. If
you don't know the answer, say you don't know.
Use three sentences maximum and keep the answer
concise.
Answer in the style of {answer_style}.
Question: {question}
Context: {context}
"""
prompt = ChatPromptTemplate.from_messages([("system", system_prompt)])
rag_chain = (
    {
        "context": itemgetter("question") | retriever,
        "question": itemgetter("question"),
        "answer_style": itemgetter("answer_style"),
    }
    | prompt
    | llm
    | StrOutputParser()
)
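The dict at the head of the chain above fans the same input out to each of its values, and `itemgetter` is simply a callable that extracts one key from that input. A plain-Python illustration:

```python
from operator import itemgetter

# Each value in the head-of-chain dict receives this same input mapping;
# itemgetter(key) just pulls out the named key.
inputs = {"question": "What are dogs known for?", "answer_style": "a pirate"}

print(itemgetter("question")(inputs))      # 'What are dogs known for?'
print(itemgetter("answer_style")(inputs))  # 'a pirate'
```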
Note that the input schema for our chain contains the required arguments, so it converts to a tool without further specification:
rag_chain.input_schema.schema()
{'title': 'RunnableParallel<context,question,answer_style>Input',
'type': 'object',
'properties': {'question': {'title': 'Question'},
'answer_style': {'title': 'Answer Style'}}}
rag_tool = rag_chain.as_tool(
    name="pet_expert",
    description="Get information about pets.",
)
Below we invoke the agent again. Note that the agent populates the required parameters in its tool_calls:
agent = create_react_agent(llm, [rag_tool])

for chunk in agent.stream(
    {"messages": [("human", "What would a pirate say dogs are known for?")]}
):
    print(chunk)
    print("----")
{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_17iLPWvOD23zqwd1QVQ00Y63', 'function': {'arguments': '{"question":"What are dogs known for according to pirates?","answer_style":"quote"}', 'name': 'pet_expert'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 28, 'prompt_tokens': 59, 'total_tokens': 87}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-7fef44f3-7bba-4e63-8c51-2ad9c5e65e2e-0', tool_calls=[{'name': 'pet_expert', 'args': {'question': 'What are dogs known for according to pirates?', 'answer_style': 'quote'}, 'id': 'call_17iLPWvOD23zqwd1QVQ00Y63'}], usage_metadata={'input_tokens': 59, 'output_tokens': 28, 'total_tokens': 87})]}}
----
{'tools': {'messages': [ToolMessage(content='"Dogs are known for their loyalty and friendliness, making them great companions for pirates on long sea voyages."', name='pet_expert', tool_call_id='call_17iLPWvOD23zqwd1QVQ00Y63')]}}
----
{'agent': {'messages': [AIMessage(content='According to pirates, dogs are known for their loyalty and friendliness, making them great companions for pirates on long sea voyages.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 119, 'total_tokens': 146}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5a30edc3-7be0-4743-b980-ca2f8cad9b8d-0', usage_metadata={'input_tokens': 119, 'output_tokens': 27, 'total_tokens': 146})]}}
----
See the LangSmith trace for the above run.