LangChain Routing Study Notes

  • 0. Introduction
  • 1. Using a prompt
  • 2. Using RunnableLambda

0. Introduction

One common scenario when building applications with large language models is invoking (or routing to) different logic depending on the input, much like the if ... else ... branching we have always used in ordinary development.

There are several ways to implement routing; two simple approaches are described below.

1. Using a prompt

In short, this approach uses a prompt to have the LLM produce a specific, constrained output for a given input (here, a one-word classification label).

Example code:

from dotenv import load_dotenv, find_dotenv

# Load API keys (e.g. OPENAI_API_KEY) from a local .env file.
_ = load_dotenv(find_dotenv())

from langchain_openai import ChatOpenAI
# from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

# Classification chain: the prompt instructs the model to answer with a single
# word, and StrOutputParser turns the chat message into a plain string.
chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    # | ChatAnthropic(model_name="claude-3-haiku-20240307")
    | ChatOpenAI(model="gpt-4", temperature=0)
    | StrOutputParser()
)

chain.invoke({"question": "how do I call Anthropic?"})

Output:

Anthropic
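
As a quick sanity check, the same chain can classify the other questions that appear later in this note; the labels in the comments are the expected classifications, not captured output:

chain.invoke({"question": "how do I use LangChain?"})  # expected: LangChain
chain.invoke({"question": "whats 2 + 2"})              # expected: Other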

2. Using RunnableLambda

The idea here is to combine the classification chain from section 1 with a routing function: the function inspects the classification result and returns whichever sub-chain should handle the question, and RunnableLambda wraps that function so it can be composed into a chain.

Example code:

from dotenv import load_dotenv, find_dotenv

# Load API keys (e.g. OPENAI_API_KEY) from a local .env file.
_ = load_dotenv(find_dotenv())

from langchain_openai import ChatOpenAI
# from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

# Step 1: the same classification chain as in section 1.
chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    # | ChatAnthropic(model_name="claude-3-haiku-20240307")
    | ChatOpenAI(model="gpt-4", temperature=0)
    | StrOutputParser()
)

# Step 2: one sub-chain per topic.
langchain_chain = PromptTemplate.from_template(
    """You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:

Question: {question}
Answer:"""
# ) | ChatAnthropic(model_name="claude-3-haiku-20240307")
) | ChatOpenAI(model="gpt-4", temperature=0)

anthropic_chain = PromptTemplate.from_template(
    """You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:"""
# ) | ChatAnthropic(model_name="claude-3-haiku-20240307")
) | ChatOpenAI(model="gpt-4", temperature=0)

general_chain = PromptTemplate.from_template(
    """Respond to the following question:

Question: {question}
Answer:"""
# ) | ChatAnthropic(model_name="claude-3-haiku-20240307")
) | ChatOpenAI(model="gpt-4", temperature=0)

# Step 3: a routing function that picks a sub-chain based on the classification.
def route(info):
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    else:
        return general_chain

# Step 4: compute the topic and pass the question through in parallel,
# then let route() decide which sub-chain actually answers.
full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(
    route
)

full_chain.invoke({"question": "how do I use Anthropic?"})

Output:

AIMessage(content="As Dario Amodei, the co-founder of Anthropic, explained, using Anthropic's language models typically involves accessing their APIs to generate text or analyze inputs. While exact steps depend on the specific application and whether you're working with a public or private API, generally, you would:\n\n1. **Sign up for access**: Visit Anthropic's website and sign up for an account if they offer public access or reach out to them for partnership if their services are not publicly available.\n\n2. **Obtain an API key**: Once your account is set up, you'll receive an API key that authorizes your application to interact with their models.\n\n3. **Understand the API documentation**: Familiarize yourself with Anthropic's API documentation which outlines how to structure requests, what parameters are available, and how to interpret responses.\n\n4. **Make API calls**: Using a programming language of your choice (like Python), write code that constructs API requests. This usually involves specifying the prompt you want the model to respond to, the maximum length of the response, and other optional settings.\n\n5. **Process the response**: The API will return a response which is typically in JSON format. Your code should parse this response to extract the generated text or any other data provided.\n\n6. **Integrate into your application**: Depending on your use case, integrate the generated text or insights into your software, whether it's for chatbots, content generation, language translation, or analysis.\n\n7. **Respect usage guidelines and ethical considerations**: Always adhere to Anthropic's terms of service, be mindful of the ethical implications of using AI, and ensure you're handling user data responsibly.\n\nRemember that the specifics might change as Anthropic evolves its services, so always refer to their latest documentation for the most accurate instructions.", response_metadata={'token_usage': {'completion_tokens': 364, 'prompt_tokens': 47, 'total_tokens': 411}, 'model_name': 'gpt-4', 'system_fingerprint': 'fp_ollama', 'finish_reason': 'stop', 'logprobs': None}, id='run-636a9a22-389b-478f-8938-51309df9a3d1-0')

Example code:

full_chain.invoke({"question": "how do I use LangChain?"})

Output:

AIMessage(content="As Harrison Chase explained, using LangChain involves several steps:\n\n1. **Understand Your Use Case**: First, determine what problem you want to solve or what task you aim to accomplish with LangChain. It's a framework designed to create powerful language models and AI applications.\n\n2. **Choose Components**: LangChain is modular, so you'll select the appropriate components for your use case. This might include LLMs (Large Language Models), vector databases, prompt engineering tools, and more.\n\n3. **Set Up Environment**: You need a development environment that supports the technologies used by LangChain, typically Python with libraries like Langchain, Hugging Face Transformers, or other necessary dependencies.\n\n4. **Integrate APIs**: If you're using external models or services, set up API keys and integrate them into your project.\n\n5. **Design Workflows**: Define how data will flow through the system, from input to processing by language models to output. This might involve creating chains of different components.\n\n6. **Write Code**: Implement your design using LangChain's APIs and modules. Start with simple scripts or move on to more complex applications as you become comfortable.\n\n7. **Test and Iterate**: Use sample inputs to test your setup, analyze the outputs, and refine your implementation based on the results.\n\n8. **Deploy and Monitor**: Once satisfied with the performance, deploy your application to a server or cloud platform. Continuously monitor its performance and make adjustments as needed.\n\nRemember, LangChain is about combining different AI components effectively, so it's crucial to have a clear understanding of each part you're using and how they interact. Always refer to the official documentation for the most up-to-date guidance and examples.", response_metadata={'token_usage': {'completion_tokens': 344, 'prompt_tokens': 44, 'total_tokens': 388}, 'model_name': 'gpt-4', 'system_fingerprint': 'fp_ollama', 'finish_reason': 'stop', 'logprobs': None}, id='run-abe4f2fd-9d7c-4f08-8e48-d32ff173d6e3-0')

Example code:

full_chain.invoke({"question": "whats 2 + 2"})

Output:

AIMessage(content='4', response_metadata={'token_usage': {'completion_tokens': 2, 'prompt_tokens': 23, 'total_tokens': 25}, 'model_name': 'gpt-4', 'system_fingerprint': 'fp_ollama', 'finish_reason': 'stop', 'logprobs': None}, id='run-3c6d5a95-cac9-4dc8-a600-63180f655196-0')
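
A note on how full_chain fits together: when a plain dict is piped into a runnable, LangChain coerces it into a parallel step, so {"topic": chain, "question": lambda x: x["question"]} runs the classifier and forwards the original question at the same time, and RunnableLambda(route) then returns the sub-chain that should handle the input. To see just these routing mechanics without calling a model, here is a minimal, self-contained sketch in which the LLM chains are replaced by hypothetical RunnableLambda placeholders:

from langchain_core.runnables import RunnableLambda

# Placeholder "chains" so the routing pattern can be exercised offline.
fake_classifier = RunnableLambda(lambda x: "Anthropic")  # stands in for the LLM classifier
anthropic_branch = RunnableLambda(lambda x: "[anthropic] " + x["question"])
langchain_branch = RunnableLambda(lambda x: "[langchain] " + x["question"])
general_branch = RunnableLambda(lambda x: "[general] " + x["question"])

def demo_route(info):
    # Same logic as route() above, just returning the placeholder branches.
    topic = info["topic"].lower()
    if "anthropic" in topic:
        return anthropic_branch
    elif "langchain" in topic:
        return langchain_branch
    return general_branch

demo_chain = {"topic": fake_classifier, "question": lambda x: x["question"]} | RunnableLambda(demo_route)
print(demo_chain.invoke({"question": "how do I call Anthropic?"}))
# -> [anthropic] how do I call Anthropic?

The real full_chain follows exactly this shape; the placeholders are simply replaced by the prompt-plus-model chains defined above.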

Done!
