Building Powerful Chains and Agents in LangChain
In this comprehensive guide, we'll dive into the world of LangChain with a focus on building powerful chains and agents. We'll cover everything from understanding the fundamentals of chains, to combining them with large language models (LLMs), to introducing sophisticated agents for autonomous decision-making.
A chain in LangChain is a sequence of operations or tasks that process data in a specific order. Chains enable modular, reusable workflows, which makes it easier to handle complex data-processing and language tasks. They are the building blocks for creating sophisticated AI-driven systems.
LangChain offers several types of chains, each suited to different scenarios:

Sequential chains: process data in a linear order, where the output of one step becomes the input of the next. They are ideal for straightforward, step-by-step workflows.

Map/reduce chains: map a function over a set of data and then reduce the results to a single output. They are well suited to processing large datasets in parallel.

Router chains: route an input to different sub-chains based on specific conditions, enabling more complex, branching workflows.
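To make the map/reduce pattern concrete, here is a minimal sketch in plain Python. This is not the LangChain API, just an illustration of the control flow, with the "documents" and step functions invented for the example:

```python
# Illustrative sketch of the map/reduce chain pattern in plain Python.
# NOT the LangChain API; it only demonstrates the control flow.
from functools import reduce

def map_reduce_chain(map_fn, reduce_fn, documents):
    # Map: apply the same function to every document independently
    # (in a real chain, each call could be an LLM invocation).
    mapped = [map_fn(doc) for doc in documents]
    # Reduce: fold the partial results into a single output.
    return reduce(reduce_fn, mapped)

# Toy example: "summarize" each document as its word count, then total them.
docs = ["LangChain builds chains", "Agents use tools", "LLMs generate text"]
total_words = map_reduce_chain(lambda d: len(d.split()), lambda a, b: a + b, docs)
print(total_words)  # 9
```

The same shape applies when the map step is an LLM summarization call and the reduce step combines partial summaries into one.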
Creating a custom chain involves defining the specific operations or functions that make up the chain. Here is an example of a custom sequential chain:
```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

class CustomChain:
    def __init__(self, llm):
        self.llm = llm
        self.steps = []

    def add_step(self, prompt_template):
        prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
        chain = LLMChain(llm=self.llm, prompt=prompt)
        self.steps.append(chain)

    def execute(self, input_text):
        # Feed the output of each step into the next.
        for step in self.steps:
            input_text = step.run(input_text)
        return input_text

# Initialize the chain
llm = OpenAI(temperature=0.7)
chain = CustomChain(llm)

# Add steps to the chain
chain.add_step("Summarize the following text in one sentence: {input}")
chain.add_step("Translate the following English text to French: {input}")

# Execute the chain
result = chain.execute("LangChain is a powerful framework for building AI applications.")
print(result)
```
This example creates a custom chain that first summarizes the input text and then translates the summary into French.
Chains integrate seamlessly with prompts and LLMs to create more powerful and flexible systems. Here is an example:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import OpenAI
from langchain.chains import SimpleSequentialChain

llm = OpenAI(temperature=0.7)

# First chain: generate a topic
first_prompt = PromptTemplate(
    input_variables=["subject"],
    template="Generate a random {subject} topic:"
)
first_chain = LLMChain(llm=llm, prompt=first_prompt)

# Second chain: write a paragraph about the topic
second_prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a short paragraph about {topic}:"
)
second_chain = LLMChain(llm=llm, prompt=second_prompt)

# Combine the chains
overall_chain = SimpleSequentialChain(chains=[first_chain, second_chain], verbose=True)

# Run the chain
result = overall_chain.run("science")
print(result)
```
This example builds a chain that generates a random science topic and then writes a paragraph about it.
To debug and optimize chain-LLM interactions, you can use the verbose parameter and custom callbacks:
```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

class CustomHandler(StdOutCallbackHandler):
    def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"LLM started with prompt: {prompts[0]}")

    def on_llm_end(self, response, **kwargs):
        print(f"LLM finished with response: {response.generations[0][0].text}")

llm = OpenAI(temperature=0.7, callbacks=[CustomHandler()])

template = "Tell me a {adjective} joke about {subject}."
prompt = PromptTemplate(input_variables=["adjective", "subject"], template=template)
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)

result = chain.run(adjective="funny", subject="programming")
print(result)
```
This example uses a custom callback handler to print detailed information about the LLM's inputs and outputs.
Agents in LangChain are autonomous entities that can use tools and make decisions to complete tasks. They combine LLMs with external tools to solve complex problems, enabling more dynamic and adaptable AI systems.
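The core of any agent is a decide/act/observe loop. Here is a minimal plain-Python sketch of that loop; the `toy_decide` function stands in for the LLM's decision-making and is invented purely for illustration, not part of LangChain:

```python
# Illustrative agent loop in plain Python: a sketch of the decide/act/observe
# cycle, not the real LangChain implementation.

def run_agent(decide, tools, question, max_steps=5):
    """decide(question, history) returns either ("final", answer)
    or ("tool", tool_name, tool_input)."""
    history = []
    for _ in range(max_steps):
        decision = decide(question, history)          # decide
        if decision[0] == "final":
            return decision[1]
        _, tool_name, tool_input = decision
        observation = tools[tool_name](tool_input)    # act
        history.append((tool_name, tool_input, observation))  # observe
    return "Gave up after max_steps"

# Toy stand-in for the LLM: use the calculator once, then answer.
def toy_decide(question, history):
    if not history:
        return ("tool", "calculator", "2 + 2")
    return ("final", f"The answer is {history[-1][2]}")

# eval() is fine for this toy calculator; never use it on untrusted input.
tools = {"calculator": lambda expr: str(eval(expr))}
print(run_agent(toy_decide, tools, "What is 2 + 2?"))  # The answer is 4
```

In real LangChain agents, the `decide` step is an LLM call whose output is parsed into a tool choice or a final answer, as the examples below show.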
LangChain provides several built-in agents, such as the zero-shot-react-description agent:
```python
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["wikipedia", "llm-math"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

result = agent.run("What is the square root of the year Plato was born?")
print(result)
```
This example creates an agent that can consult Wikipedia and perform math calculations to answer complex questions.
You can create custom agents by defining your own tools and agent classes, which allows for highly specialized agents tailored to specific tasks or domains.
Here is an example of a custom agent:
```python
from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent
from langchain.prompts import StringPromptTemplate
from langchain import OpenAI, SerpAPIWrapper, LLMChain
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish
import re

# Define custom tools
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="Useful for answering questions about current events"
    )
]

# Define a custom prompt template
template = """Answer the following questions as best you can: {input}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought: {agent_scratchpad}"""

class CustomPromptTemplate(StringPromptTemplate):
    template: str
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        # Rebuild the scratchpad from the intermediate (action, observation) pairs.
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        kwargs["agent_scratchpad"] = thoughts
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        return self.template.format(**kwargs)

prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    input_variables=["input", "intermediate_steps"]
)

# Define a custom output parser
class CustomOutputParser:
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        action_match = re.search(r"Action: (\w+)", llm_output, re.DOTALL)
        action_input_match = re.search(r"Action Input: (.*)", llm_output, re.DOTALL)
        if not action_match or not action_input_match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = action_match.group(1).strip()
        action_input = action_input_match.group(1).strip(" ").strip('"')
        return AgentAction(tool=action, tool_input=action_input, log=llm_output)

# Create the custom output parser
output_parser = CustomOutputParser()

# Define the LLM chain
llm = OpenAI(temperature=0)
llm_chain = LLMChain(llm=llm, prompt=prompt)

# Define the custom agent
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=[tool.name for tool in tools]
)

# Create an agent executor
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)

# Run the agent
result = agent_executor.run("What's the latest news about AI?")
print(result)
```
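To see what the output parser contributes in isolation, here is a self-contained sketch of the same parsing logic, stripped of the LangChain types so it can run on its own. The sample "LLM output" strings are invented for illustration:

```python
import re

# Standalone sketch of ReAct-style output parsing; mirrors the regexes in the
# custom parser above, but returns plain tuples instead of LangChain types.
def parse(llm_output):
    if "Final Answer:" in llm_output:
        return ("finish", llm_output.split("Final Answer:")[-1].strip())
    action = re.search(r"Action: (\w+)", llm_output)
    action_input = re.search(r"Action Input: (.*)", llm_output)
    if not action or not action_input:
        raise ValueError(f"Could not parse LLM output: {llm_output!r}")
    return ("action", action.group(1).strip(),
            action_input.group(1).strip().strip('"'))

# Invented sample outputs, shaped like what the prompt template asks for.
sample = 'Thought: I need current information.\nAction: Search\nAction Input: "latest AI news"'
print(parse(sample))              # ('action', 'Search', 'latest AI news')
print(parse("Final Answer: 42"))  # ('finish', '42')
```

The parser is what turns the LLM's free-form text into a structured tool invocation or a final answer, which is why malformed output raises an error rather than being silently executed.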
LangChain's chains and agents offer powerful capabilities for building sophisticated AI-driven systems. Integrated with large language models (LLMs), they enable adaptable, intelligent applications that can tackle a wide variety of tasks. As you progress on your LangChain journey, experiment with different chain types, agent setups, and custom modules to realize the framework's full potential.