LangChain with ConversationBufferMemory in Streamlit application does not work

I have a Streamlit chatbot that works perfectly fine, but it does not remember previous chat history. I tried to add that with LangChain's ConversationBufferMemory, but it does not seem to work.

Here is a sample of the chatbot I created:

import streamlit as st
from streamlit_chat import message

from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.chat_models import AzureChatOpenAI

from langchain.memory import ConversationBufferMemory

from langchain.prompts import (
    ChatPromptTemplate, 
    MessagesPlaceholder, 
    SystemMessagePromptTemplate, 
    HumanMessagePromptTemplate
)


prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])


def load_chain(prompt):
    """Logic for loading the chain you want to use should go here."""
    llm = AzureChatOpenAI(
                    deployment_name = 'gpt-35-turbo',
                    model_name = 'gpt-35-turbo',
                    temperature = 0,
                    openai_api_key = '.....',
                    openai_api_base = '.....',
                    openai_api_version = "2023-05-15", 
                    openai_api_type="azure"
                    )
    memory = ConversationBufferMemory(return_messages=True)
    chain = ConversationChain(
        llm=llm,
        verbose=True,
        prompt=prompt,
        memory=memory
    )
    return chain

chain = load_chain(prompt)

# From here down is all the StreamLit UI.
st.set_page_config(page_title="LangChain Demo", page_icon=":robot:")
st.header("LangChain Demo")

if "generated" not in st.session_state:
    st.session_state["generated"] = []

if "past" not in st.session_state:
    st.session_state["past"] = []

if "history" not in st.session_state:
    st.session_state["history"] = []

def get_text():
    input_text = st.text_input("You: ", "Hello, how are you?", key="input")
    return input_text


user_input = get_text()

if user_input:
    output = chain.run(input=user_input, history=st.session_state["history"])
    st.session_state["history"].append((user_input, output))
    st.session_state.past.append(user_input)
    st.session_state.generated.append(output)
    st.write(st.session_state["history"])

if st.session_state["generated"]:

    for i in range(len(st.session_state["generated"]) - 1, -1, -1):
        message(st.session_state["generated"][i], key=str(i))
        message(st.session_state["past"][i], is_user=True, key=str(i) + "_user")

It looks like the bot ignores the ConversationBufferMemory for some reason. Any help would be appreciated.

Answer 1

Score: 1


@hiper2d's answer said it clearly. I was facing the same issue for the past hour; in hindsight, I should have known.

Any time something must be updated on the screen, Streamlit reruns your entire Python script from top to bottom. This means the ConversationBufferMemory is re-initialized on every refresh. If we are going to store the history in session state anyway, I don't see much use for the memory abstraction.

But, if you do want to do that, you can set up a session state:

if "messages" not in st.session_state:
    st.session_state.messages = []

# While processing each response
st.session_state.messages.append({"role": "assistant", "content": output_from_llm})
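Building on that: the root cause is that `chain = load_chain(prompt)` runs on every rerun, so a fresh (empty) ConversationBufferMemory is created each time. One way to keep the memory abstraction is to create the chain once and cache it in `st.session_state`. The rerun behavior can be sketched without Streamlit at all; in this minimal sketch a plain dict stands in for `st.session_state` and a bare `object()` stands in for the chain (both stand-ins are assumptions for illustration):

```python
# A plain dict stands in for st.session_state, which Streamlit preserves
# across reruns; every other module-level object is rebuilt on each pass.
session_state = {}

def rerun_script():
    """Simulates one top-to-bottom pass of the Streamlit script."""
    # The guard is the fix: without it, a fresh chain (and a fresh, empty
    # ConversationBufferMemory) would be created on every rerun.
    if "chain" not in session_state:
        session_state["chain"] = object()  # stands in for load_chain(prompt)
    return session_state["chain"]

first = rerun_script()
second = rerun_script()
assert first is second  # the same chain object, so its memory buffer persists
```

In the real app the same guard applies: `if "chain" not in st.session_state: st.session_state["chain"] = load_chain(prompt)`, then use `st.session_state["chain"]` everywhere instead of the module-level `chain`.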

huangapple
  • Posted on 2023-07-11 13:23:34
  • Please keep this link when reposting: https://go.coder-hub.com/76658904.html