tenacity.RetryError: RetryError[<Future at 0x7f89bc35eb90 state=finished raised AuthenticationError>]

# Question
I am trying to deploy an app made with Streamlit (also using streamlit_chat and streamlit_authenticator). The app uses llama-index to build a query engine on top of the ChatGPT API. When I run `streamlit run app.py` on my computer everything works fine, but when I deploy it the following error is raised:
```
2023-06-07 16:45:28.682 Uncaught app exception
Traceback (most recent call last):
  File "/home/appuser/venv/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/home/appuser/venv/lib/python3.10/site-packages/llama_index/embeddings/openai.py", line 106, in get_embedding
    return openai.Embedding.create(input=[text], engine=engine)["data"][0]["embedding"]
  File "/home/appuser/venv/lib/python3.10/site-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "/home/appuser/venv/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 149, in create
    ) = cls.__prepare_create_request(
  File "/home/appuser/venv/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 106, in __prepare_create_request
    requestor = api_requestor.APIRequestor(
  File "/home/appuser/venv/lib/python3.10/site-packages/openai/api_requestor.py", line 138, in __init__
    self.api_key = key or util.default_api_key()
  File "/home/appuser/venv/lib/python3.10/site-packages/openai/util.py", line 186, in default_api_key
    raise openai.error.AuthenticationError(
openai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/appuser/venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 561, in _run_script
    self._session_state.on_script_will_rerun(rerun_data.widget_states)
  File "/home/appuser/venv/lib/python3.10/site-packages/streamlit/runtime/state/safe_session_state.py", line 68, in on_script_will_rerun
    self._state.on_script_will_rerun(latest_widget_states)
  File "/home/appuser/venv/lib/python3.10/site-packages/streamlit/runtime/state/session_state.py", line 482, in on_script_will_rerun
    self._call_callbacks()
  File "/home/appuser/venv/lib/python3.10/site-packages/streamlit/runtime/state/session_state.py", line 495, in _call_callbacks
    self._new_widget_state.call_callback(wid)
  File "/home/appuser/venv/lib/python3.10/site-packages/streamlit/runtime/state/session_state.py", line 247, in call_callback
    callback(*args, **kwargs)
  File "/app/bajoquetumgpt/docsv2.py", line 35, in generate_answer
    response = query_engine.query(user_msg)
  File "/home/appuser/venv/lib/python3.10/site-packages/llama_index/indices/query/base.py", line 20, in query
    return self._query(str_or_query_bundle)
  File "/home/appuser/venv/lib/python3.10/site-packages/llama_index/query_engine/retriever_query_engine.py", line 139, in _query
    nodes = self._retriever.retrieve(query_bundle)
  File "/home/appuser/venv/lib/python3.10/site-packages/llama_index/indices/base_retriever.py", line 21, in retrieve
    return self._retrieve(str_or_query_bundle)
  File "/home/appuser/venv/lib/python3.10/site-packages/llama_index/token_counter/token_counter.py", line 78, in wrapped_llm_predict
    f_return_val = f(_self, *args, **kwargs)
  File "/home/appuser/venv/lib/python3.10/site-packages/llama_index/indices/vector_store/retrievers.py", line 62, in _retrieve
    self._service_context.embed_model.get_agg_embedding_from_queries(
  File "/home/appuser/venv/lib/python3.10/site-packages/llama_index/embeddings/base.py", line 83, in get_agg_embedding_from_queries
    query_embeddings = [self.get_query_embedding(query) for query in queries]
  File "/home/appuser/venv/lib/python3.10/site-packages/llama_index/embeddings/base.py", line 83, in <listcomp>
    query_embeddings = [self.get_query_embedding(query) for query in queries]
  File "/home/appuser/venv/lib/python3.10/site-packages/llama_index/embeddings/base.py", line 72, in get_query_embedding
    query_embedding = self._get_query_embedding(query)
  File "/home/appuser/venv/lib/python3.10/site-packages/llama_index/embeddings/openai.py", line 223, in _get_query_embedding
    return get_embedding(query, engine=engine)
  File "/home/appuser/venv/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/home/appuser/venv/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/home/appuser/venv/lib/python3.10/site-packages/tenacity/__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7f89bc35eb90 state=finished raised AuthenticationError>]
```
The code is private but I can show the part where the authenticator and the query engine are used:
```python
import os

import streamlit as st
import streamlit_authenticator as stauth
import yaml
from llama_index import StorageContext, load_index_from_storage
from streamlit_chat import message
from yaml.loader import SafeLoader

with open('./config.yaml') as file:
    config = yaml.load(file, Loader=SafeLoader)

authenticator = stauth.Authenticate(
    config['credentials'],
    config['cookie']['name'],
    config['cookie']['key'],
    config['cookie']['expiry_days'],
    config['preauthorized']
)

name, authentication_status, username = authenticator.login('Login', 'main')
print(username, name, authentication_status)

if authentication_status:
    authenticator.logout('Logout', 'main')
    st.write(f'Welcome *{name}*')
elif authentication_status == False:
    st.error('Username/password is incorrect')
elif authentication_status == None:
    st.warning('Please enter your username and password')

FIRST_OUTPUT = """Hello!"""

if authentication_status:
    API_KEY = config['credentials']['usernames'][username].get('openaiapi', "")
    st.text("""First text""")
    text_input_container = st.empty()
    if API_KEY == "":
        API_KEY = text_input_container.text_input(label='Introduce your OpenAI API Key:', label_visibility='hidden', placeholder='Introduce your OpenAI API Key:')
    if API_KEY != "":
        text_input_container.empty()
        os.environ['OPENAI_API_KEY'] = API_KEY
```
And it continues as:
```python
if API_KEY != '':
    # Load the index from your saved index.json file
    storage_context = StorageContext.from_defaults(persist_dir='./storage')
    # load index
    index = load_index_from_storage(storage_context)
    query_engine = index.as_query_engine()

    if "history" not in st.session_state:
        st.session_state.history = initial_history

    c = st.expander("Open to see the previous messages!")
    for i, chat in enumerate(st.session_state.history):
        if i < len(st.session_state.history) - 6:
            with c:
                message(**chat, key=str(i))  # unpacking
        else:
            message(**chat, key=str(i))  # unpacking

    st.text_input("You: ", "", key="input_text", on_change=generate_answer)
```
The function generate_answer is:
```python
def generate_answer():
    user_msg = st.session_state.input_text
    st.session_state.input_text = ""
    response = query_engine.query(user_msg)
    st.session_state.history.append(
        {"message": user_msg, "is_user": True,
         "avatar_style": "fun-emoji",
         "seed": 4}
    )
    st.session_state.history.append(
        {"message": str(response).strip(), "is_user": False,
         "avatar_style": "bottts-neutral",
         "seed": 36}
    )
```
I would appreciate any help with this.
# Answer 1

**Score**: 9
You are probably setting the API key like this:
```python
os.environ["OPENAI_API_KEY"] = 'YOUR_KEY'
```
The problem is that this only sets the variable in your local environment; the `openai` module itself never receives the key. To pass it to OpenAI you have to do something like this:
```python
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
```
I'll leave a piece of my code here so you can understand better.
```python
from llama_index import SimpleDirectoryReader, GPTVectorStoreIndex, LLMPredictor, ServiceContext, StorageContext, load_index_from_storage
from langchain import OpenAI
import gradio as gr
import os
import openai

# Here I fill my LOCAL environment variable
os.environ["OPENAI_API_KEY"] = 'YOUR_KEY'

def construct_index(directory_path):
    # And here I pass the key to the openai module
    openai.api_key = os.environ["OPENAI_API_KEY"]

    num_outputs = 512
    llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.7, model_name="text-davinci-003", max_tokens=num_outputs))
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

    docs = SimpleDirectoryReader(directory_path).load_data()
    index = GPTVectorStoreIndex.from_documents(docs, service_context=service_context)
    index.storage_context.persist()

    return index
```
That worked for me.
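Applied to the snippet from the question, that means one extra line right after the key is read. A minimal sketch, reusing the question's `API_KEY` and `text_input_container` names; the `openai.api_key` assignment is the only new line:

```python
import os
import openai  # make sure this is imported at the top of the app

if API_KEY != "":
    text_input_container.empty()
    os.environ['OPENAI_API_KEY'] = API_KEY
    # Setting the environment variable after openai has already been
    # imported is not enough on its own; hand the key to the module too.
    openai.api_key = API_KEY
```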
# Answer 2

**Score**: 3
Change this:

```python
os.environ["OPENAI_API_KEY"] = 'YOUR_KEY'
```

to this:

```python
openai.api_key = "key"
```
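The reason the environment-variable assignment is not picked up is that the legacy `openai` 0.x client reads `OPENAI_API_KEY` only once, at import time. A minimal sketch of the distinction, assuming openai 0.x:

```python
import os

# Works: the variable exists before openai is imported, so the module
# picks it up during import.
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder key
import openai

# Also works after import: assign the attribute directly.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Does NOT work on its own: openai was already imported above, so it
# never re-reads the environment variable.
os.environ["OPENAI_API_KEY"] = "sk-some-other-key"
```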