Langchain: Custom Output Parser not working with ConversationChain


Question


I am creating a chatbot with langchain's ConversationChain, so it needs conversation memory. However, at the end of each of its responses it starts a new line and writes a bunch of gibberish, so I created a custom output parser to remove this gibberish. That parser, however, gives a validation error. I am new to langchain, so any help would be appreciated.

from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain import PromptTemplate  # used for PROMPT below

class MyOutputParser:
    def __init__(self):
        pass

    def parse(self, output):
        cut_off = output.find("\n", 3)
        # delete everything after new line
        return output[:cut_off]

template = """You will answer the following questions the best you can, being as informative and factual as possible.
If you don't know, say you don't know.
Current conversation:
{history}
Human: {input}
AI Assistant:"""

the_output_parser = MyOutputParser()
print(type(the_output_parser))

PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)

conversation = ConversationChain(
    prompt=PROMPT,
    llm=local_llm,
    memory=ConversationBufferWindowMemory(k=4),
    return_final_only=True,
    verbose=False,
    output_parser=the_output_parser,
)

This is the error it gives me:

ValidationError: 1 validation error for ConversationChain
output_parser
  value is not a valid dict (type=type_error.dict)

Answer 1

Score: 1

I'm not sure exactly what you're trying to do, and this area seems to be highly dependent on the version of LangChain you're using, but it seems that your output parser neither inherits from BaseLLMOutputParser nor follows its method signatures, as it should.
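
That would also explain the wording of the error: LangChain 0.0.x chains are pydantic (v1) models, and the output_parser field is typed as BaseLLMOutputParser, so an object of an unrelated class falls through to pydantic's dict-coercion fallback and fails with type_error.dict. A minimal sketch of that mechanism, using hypothetical stand-in classes (Parser, Chain and MyParser are not LangChain names):

from pydantic import BaseModel  # pydantic v1 behaviour

class Parser(BaseModel):        # stands in for BaseLLMOutputParser (a pydantic model)
    pass

class Chain(BaseModel):         # stands in for ConversationChain
    output_parser: Parser

class MyParser:                 # a plain class, like the original MyOutputParser
    pass

# Raises: 1 validation error for Chain
#   output_parser
#     value is not a valid dict (type=type_error.dict)
Chain(output_parser=MyParser())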

For LangChain 0.0.261, to fix your specific question about the output parser, try:

from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain import PromptTemplate
from langchain.chains import ConversationChain
from langchain.schema.output_parser import BaseLLMOutputParser

class MyOutputParser(BaseLLMOutputParser):
    def __init__(self):
        super().__init__()

    def parse_result(self, output):
        cut_off = output.find("\n", 3)
        # delete everything after new line
        return output[:cut_off]
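
Because MyOutputParser now subclasses BaseLLMOutputParser, constructing the chain should pass validation, and the rest of the original setup can stay as it was. A minimal wiring sketch, assuming local_llm and template are defined as in the question (note that the exact value handed to parse_result, a plain string or a list of Generation objects, differs between LangChain releases, so the string slicing may need adjusting):

PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)

# Constructing the chain no longer raises the type_error.dict ValidationError,
# because output_parser is now an instance of a BaseLLMOutputParser subclass.
conversation = ConversationChain(
    prompt=PROMPT,
    llm=local_llm,  # assumed: the asker's locally loaded LLM
    memory=ConversationBufferWindowMemory(k=4),
    return_final_only=True,
    verbose=False,
    output_parser=MyOutputParser(),
)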
