LangChain Routing Study Notes
- 0. Introduction
- 1. Routing with a Prompt
- 2. Routing with RunnableLambda
0. Introduction
When building applications with large language models, a common scenario is dispatching to (or routing between) different logic depending on the input, much like the familiar if ... else ... branching in traditional programming.
There are several ways to implement routing; below are two simple approaches.
1. Routing with a Prompt
In this approach, a prompt instructs the language model to classify the input and return a specific one-word label.
Example code:
```python
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())

from langchain_openai import ChatOpenAI
# from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

# Classifier chain: prompt -> LLM -> string label
chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    # | ChatAnthropic(model_name="claude-3-haiku-20240307")
    | ChatOpenAI(model="gpt-4", temperature=0)
    | StrOutputParser()
)

chain.invoke({"question": "how do I call Anthropic?"})
```
Output:

```
Anthropic
```
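With the label in hand, you can already route using plain Python control flow; a minimal sketch (the function and chain names here are hypothetical, just to illustrate the branching):

```python
def route_by_label(label: str) -> str:
    """Map the classifier's one-word label to a destination chain name."""
    label = label.strip().lower()
    if label == "anthropic":
        return "anthropic_chain"
    elif label == "langchain":
        return "langchain_chain"
    return "general_chain"

print(route_by_label("Anthropic"))  # anthropic_chain
print(route_by_label("Other"))     # general_chain
```

The next section shows how LangChain lets you express this branching as part of the chain itself, instead of manual control flow.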
2. Routing with RunnableLambda
A RunnableLambda wraps an ordinary Python function so it can participate in a chain; here the function inspects the classifier's label and returns the chain to run next.
Example code:
```python
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())

from langchain_openai import ChatOpenAI
# from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

# Classifier chain from the previous section: prompt -> LLM -> string label
chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    # | ChatAnthropic(model_name="claude-3-haiku-20240307")
    | ChatOpenAI(model="gpt-4", temperature=0)
    | StrOutputParser()
)

# Destination chains, one per topic
langchain_chain = PromptTemplate.from_template(
    """You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:

Question: {question}
Answer:"""
# ) | ChatAnthropic(model_name="claude-3-haiku-20240307")
) | ChatOpenAI(model="gpt-4", temperature=0)

anthropic_chain = PromptTemplate.from_template(
    """You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:"""
# ) | ChatAnthropic(model_name="claude-3-haiku-20240307")
) | ChatOpenAI(model="gpt-4", temperature=0)

general_chain = PromptTemplate.from_template(
    """Respond to the following question:

Question: {question}
Answer:"""
# ) | ChatAnthropic(model_name="claude-3-haiku-20240307")
) | ChatOpenAI(model="gpt-4", temperature=0)

# Pick a destination chain based on the classifier's label
def route(info):
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    else:
        return general_chain

# The dict is coerced into a RunnableParallel whose output feeds route()
full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(
    route
)

full_chain.invoke({"question": "how do I use Anthropic?"})
```
Output:

```
AIMessage(content="As Dario Amodei, the co-founder of Anthropic, explained, using Anthropic's language models typically involves accessing their APIs to generate text or analyze inputs. While exact steps depend on the specific application and whether you're working with a public or private API, generally, you would:\n\n1. **Sign up for access**: Visit Anthropic's website and sign up for an account if they offer public access or reach out to them for partnership if their services are not publicly available.\n\n2. **Obtain an API key**: Once your account is set up, you'll receive an API key that authorizes your application to interact with their models.\n\n3. **Understand the API documentation**: Familiarize yourself with Anthropic's API documentation which outlines how to structure requests, what parameters are available, and how to interpret responses.\n\n4. **Make API calls**: Using a programming language of your choice (like Python), write code that constructs API requests. This usually involves specifying the prompt you want the model to respond to, the maximum length of the response, and other optional settings.\n\n5. **Process the response**: The API will return a response which is typically in JSON format. Your code should parse this response to extract the generated text or any other data provided.\n\n6. **Integrate into your application**: Depending on your use case, integrate the generated text or insights into your software, whether it's for chatbots, content generation, language translation, or analysis.\n\n7. **Respect usage guidelines and ethical considerations**: Always adhere to Anthropic's terms of service, be mindful of the ethical implications of using AI, and ensure you're handling user data responsibly.\n\nRemember that the specifics might change as Anthropic evolves its services, so always refer to their latest documentation for the most accurate instructions.", response_metadata={'token_usage': {'completion_tokens': 364, 'prompt_tokens': 47, 'total_tokens': 411}, 'model_name': 'gpt-4', 'system_fingerprint': 'fp_ollama', 'finish_reason': 'stop', 'logprobs': None}, id='run-636a9a22-389b-478f-8938-51309df9a3d1-0')
```
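The routing step inside full_chain is ordinary Python, so its mechanics can be shown without any LLM calls. A self-contained sketch with stub functions standing in for the three chains (stub names and return values are illustrative, not part of the original example):

```python
# Stub "chains" standing in for the three LLM chains above
def anthropic_chain(info):
    return "As Dario Amodei told me: ..."

def langchain_chain(info):
    return "As Harrison Chase told me: ..."

def general_chain(info):
    return "..."

def route(info):
    # "topic" carries the classifier's one-word verdict
    topic = info["topic"].lower()
    if "anthropic" in topic:
        return anthropic_chain
    elif "langchain" in topic:
        return langchain_chain
    return general_chain

# RunnableLambda(route) does exactly this at runtime: route() selects a
# chain, and that chain is then invoked with the same input dict.
info = {"topic": "Anthropic", "question": "how do I use Anthropic?"}
print(route(info)(info))  # As Dario Amodei told me: ...
```

Note that route() returns a chain rather than a string: LangChain detects that the RunnableLambda produced another runnable and invokes it with the same input.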
Example code:

```python
full_chain.invoke({"question": "how do I use LangChain?"})
```
Output:

```
AIMessage(content="As Harrison Chase explained, using LangChain involves several steps:\n\n1. **Understand Your Use Case**: First, determine what problem you want to solve or what task you aim to accomplish with LangChain. It's a framework designed to create powerful language models and AI applications.\n\n2. **Choose Components**: LangChain is modular, so you'll select the appropriate components for your use case. This might include LLMs (Large Language Models), vector databases, prompt engineering tools, and more.\n\n3. **Set Up Environment**: You need a development environment that supports the technologies used by LangChain, typically Python with libraries like Langchain, Hugging Face Transformers, or other necessary dependencies.\n\n4. **Integrate APIs**: If you're using external models or services, set up API keys and integrate them into your project.\n\n5. **Design Workflows**: Define how data will flow through the system, from input to processing by language models to output. This might involve creating chains of different components.\n\n6. **Write Code**: Implement your design using LangChain's APIs and modules. Start with simple scripts or move on to more complex applications as you become comfortable.\n\n7. **Test and Iterate**: Use sample inputs to test your setup, analyze the outputs, and refine your implementation based on the results.\n\n8. **Deploy and Monitor**: Once satisfied with the performance, deploy your application to a server or cloud platform. Continuously monitor its performance and make adjustments as needed.\n\nRemember, LangChain is about combining different AI components effectively, so it's crucial to have a clear understanding of each part you're using and how they interact. Always refer to the official documentation for the most up-to-date guidance and examples.", response_metadata={'token_usage': {'completion_tokens': 344, 'prompt_tokens': 44, 'total_tokens': 388}, 'model_name': 'gpt-4', 'system_fingerprint': 'fp_ollama', 'finish_reason': 'stop', 'logprobs': None}, id='run-abe4f2fd-9d7c-4f08-8e48-d32ff173d6e3-0')
```
Example code:

```python
full_chain.invoke({"question": "whats 2 + 2"})
```
Output:

```
AIMessage(content='4', response_metadata={'token_usage': {'completion_tokens': 2, 'prompt_tokens': 23, 'total_tokens': 25}, 'model_name': 'gpt-4', 'system_fingerprint': 'fp_ollama', 'finish_reason': 'stop', 'logprobs': None}, id='run-3c6d5a95-cac9-4dc8-a600-63180f655196-0')
```
Done!