Is anybody able to run langchain gpt4all successfully?
Question
The following piece of code is from https://python.langchain.com/docs/modules/model_io/models/llms/integrations/gpt4all
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
local_path = (
"./models/ggml-gpt4all-l13b-snoozy.bin" # replace with your desired local file path
)
# Callbacks support token-wise streaming
callbacks = [StreamingStdOutCallbackHandler()]
# Verbose is required to pass to the callback manager
llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)
# If you want to use a custom model add the backend parameter
# Check https://docs.gpt4all.io/gpt4all_python.html for supported backends
llm = GPT4All(model=local_path, backend="gptj", callbacks=callbacks, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
Here is the error message:
Found model file at ./models/ggml-gpt4all-l13b-snoozy.bin
Invalid model file
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[16], line 19
17 callbacks = [StreamingStdOutCallbackHandler()]
18 # Verbose is required to pass to the callback manager
---> 19 llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)
20 # If you want to use a custom model add the backend parameter
21 # Check https://docs.gpt4all.io/gpt4all_python.html for supported backends
22 llm = GPT4All(model=local_path, backend="gptj", callbacks=callbacks, verbose=True)
File ~/anaconda3/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for GPT4All
__root__
Unable to instantiate model (type=value_error)
Answer 1 (score: 1)
Without further info (e.g., versions, OS, ...), it is hard to say what the problem here is.
The first thing to check is whether ./models/ggml-gpt4all-l13b-snoozy.bin
is valid. For this, compare the checksum of the local file with the valid ones, which you can find here: https://gpt4all.io/models/models.json.
Note that your model is not in that file and is no longer officially supported in the current version of gpt4all (1.0.2), so you might want to download and use GPT4All-13B-snoozy.ggmlv3.q4_0.bin
from their official website instead. If the checksum is not correct, delete the old file and re-download.
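The checksum comparison can be sketched roughly as follows, assuming models.json lists an MD5 per model (verify the exact field name against the actual JSON yourself):

```python
import hashlib
import os

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 checksum of a file, reading it in streaming chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

model_file = "./models/ggml-gpt4all-l13b-snoozy.bin"
if os.path.exists(model_file):
    # Compare this value against the checksum listed for the model in models.json
    print(md5_of(model_file))
```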
If the problem persists, try to load the model directly via gpt4all to pinpoint whether the problem comes from the file / the gpt4all package or from the langchain package.
from gpt4all import GPT4All
model = GPT4All("ggml-gpt4all-l13b-snoozy.bin", model_path="./models/")
Finally, you are not supposed to call both line 19 and line 22. As the comments state: if you have a predefined model, use line 19; if you have a custom one, use line 22.
Answer 2 (score: 0)
First, you have to download "ggml-gpt4all-l13b-snoozy.bin" to your PC:
from gpt4all import GPT4All
GPT4All(model_name="ggml-gpt4all-l13b-snoozy.bin")
Then you can use langchain.llms to access your model.
WARNING: use gpt4all == 0.3.5
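The version pin from the warning can be applied like so (a setup sketch; the answer does not state which langchain version this pairs with):

```shell
# Pin gpt4all to the version the answer recommends; newer releases changed the API
pip install "gpt4all==0.3.5"
```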