Langchain: Custom Output Parser not working with ConversationChain
Question
I am creating a chatbot with langchain's ConversationChain, so it needs conversation memory. However, at the end of each of its responses, it adds a new line and writes a bunch of gibberish. So I created a custom output parser to remove this gibberish. However, it gives a validation error. I am new to langchain, so any help would be appreciated.
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain import PromptTemplate

class MyOutputParser:
    def __init__(self):
        pass

    def parse(self, output):
        cut_off = output.find("\n", 3)
        # delete everything after the new line
        return output[:cut_off]
template = """You will answer the following questions the best you can, being as informative and factual as possible.
If you don't know, say you don't know.
Current conversation:
{history}
Human: {input}
AI Assistant:"""
the_output_parser = MyOutputParser()
print(type(the_output_parser))

PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)

conversation = ConversationChain(
    prompt=PROMPT,
    llm=local_llm,
    memory=ConversationBufferWindowMemory(k=4),
    return_final_only=True,
    verbose=False,
    output_parser=the_output_parser,
)
This is the error it gives me:
ValidationError: 1 validation error for ConversationChain
output_parser
value is not a valid dict (type=type_error.dict)
Answer 1

Score: 1
I'm not sure exactly what you're trying to do, and this area seems to depend heavily on the version of LangChain you're using, but it appears that your output parser neither inherits from BaseLLMOutputParser nor follows its method signature, as it should.
For LangChain 0.0.261, to fix your specific question about the output parser, try:
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain import PromptTemplate
from langchain.chains import ConversationChain
from langchain.schema.output_parser import BaseLLMOutputParser

class MyOutputParser(BaseLLMOutputParser):
    def __init__(self):
        super().__init__()

    def parse_result(self, output):
        cut_off = output.find("\n", 3)
        # delete everything after the new line
        return output[:cut_off]
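The subclassed parser can then be passed to the chain the same way as in the question. Below is a minimal sketch, assuming local_llm and template are the same objects defined in the question; note that, depending on the LangChain version, parse_result may receive a list of Generation objects rather than a plain string, in which case you would need to read the text from result[0].text before applying the cut-off.

the_output_parser = MyOutputParser()

PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)

conversation = ConversationChain(
    prompt=PROMPT,
    llm=local_llm,                    # assumed: the local model from the question
    memory=ConversationBufferWindowMemory(k=4),
    return_final_only=True,
    verbose=False,
    output_parser=the_output_parser,  # now a BaseLLMOutputParser subclass
)

print(conversation.predict(input="Hello, who are you?"))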