How to prevent hallucinations using OpenAI's gpt-3.5-turbo API?
Question
I am attempting to build a personal recruiting-assistant tool to help me automate answering job-application questions like "Why do you want to work here?" or "Write a cover letter saying how your experience relates to this position."
I am using my resume and a Q&A section as the context for the API call, and the question as the prompt. Overall, the system works well, but it often hallucinates my experience. For example, it will state that I have more years of experience than my resume actually outlines, or it will say I am an expert in technologies that I have never used or listed in my context.
I have tried explicitly prompting the model with the following code, but it has not been successful:
def prepare_context(self):
    # Opening instructions followed by the resume text.
    context = (
        "Your role in this instance is "
        "to act as a recruiting assistant and help answer interview questions. "
        "Your responses should be based on the information provided in the resume and "
        "previous Q&As without hallucinating additional information or experience.\n\n"
        "My Resume:\n\n"
        f"{self.resume}\n\n"
        "My Previously answered questions:\n"
    )
    # Append each previously answered Q&A pair.
    for q, a in self.qa_list:
        context += f"Q: {q}\nA: {a}\n"
    # Closing constraints: first person, concise, grounded only in the above.
    context += (
        "\nWhen answering the following questions, remember to answer "
        "in the first person as if you were the job applicant. The responses "
        "should be concise, truthful, and based solely on the given information. "
        "Do not create or infer any additional experiences or skills that are "
        "not explicitly mentioned in the resume or previous Q&As. "
        "Do not generate any information that is not included in the resume or previous questions and answers. "
        "Remember, you are answering as me "
        "so answer all further questions from my perspective.\n"
    )
    return context
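For reference, here is a minimal sketch of how the prepared context might be sent to gpt-3.5-turbo, passing it as the system message and setting temperature to 0, which reduces (but does not prevent) off-script output. It assumes the pre-1.0 openai Python client and a hypothetical assistant object exposing the prepare_context() method above:

import openai

def answer_question(assistant, question):
    # Sketch only: `assistant` is assumed to expose the prepare_context()
    # method shown above; requires openai<1.0 and OPENAI_API_KEY to be set.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # The resume/Q&A context and its constraints go in the system message.
            {"role": "system", "content": assistant.prepare_context()},
            {"role": "user", "content": question},
        ],
        temperature=0,  # low randomness reduces, but does not prevent, drift
    )
    return response["choices"][0]["message"]["content"]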
Answer 1
Score: 1
You can’t
Unless I missed a headline, there is currently no way to prevent hallucinations with any large language model. If you're going to do this, you either need to be okay with random errors, whatever they turn out to be, or you'll need to double-check all of the AI's output every time. You could create a second-stage automated error checker, but no matter how you implement it, some errors will still get through, leaving you with the choice of either accepting the errors or reviewing everything.
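For illustration, a second-stage checker along those lines might ask the model to verify the generated answer against the resume. This is a sketch under the same pre-1.0 openai client assumption, with made-up prompt wording, and the verdict can itself be wrong:

import openai

def is_supported(resume, answer):
    # Hypothetical verification pass: ask the model whether the generated
    # answer makes any claim the resume does not back up. The verdict can
    # itself be wrong, so this reduces review effort rather than removing it.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Resume:\n{resume}\n\n"
                f"Candidate answer:\n{answer}\n\n"
                "Does the answer claim any experience or skill that is NOT "
                "supported by the resume? Reply with exactly SUPPORTED or "
                "UNSUPPORTED."
            ),
        }],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"].strip() == "SUPPORTED"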
In general, I would only use generative AI to automate low-risk tasks. Applying for jobs isn't something I'd consider low-risk, as "hallucinations" could have severe consequences for your professional reputation and future employability. Just my two cents.
Answer 2
Score: 0
There is a way to eliminate LLM hallucination with a Retrieval-Augmented Generation (RAG) process. Pinecone and LangChain are not sufficient on their own; pre-processing of the content is necessary. EyelevelAI has developed a solution called GroundX. You can request API access from GroundX.ai.
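GroundX's pre-processing is proprietary, but the retrieval step that any RAG pipeline builds on can be sketched roughly as follows: split the source document into chunks, embed them, and keep only the chunks most similar to the question in the prompt. The chunking strategy, embedding model, and top_k value below are assumptions for illustration, not GroundX's actual method:

import numpy as np
import openai

def top_chunks(resume, question, top_k=3):
    # Naive pre-processing: treat each blank-line-separated paragraph
    # of the resume as one retrievable chunk.
    chunks = [c.strip() for c in resume.split("\n\n") if c.strip()]

    # Embed the chunks and the question with the same model (pre-1.0 client).
    result = openai.Embedding.create(
        model="text-embedding-ada-002", input=chunks + [question]
    )
    vectors = [np.array(item["embedding"]) for item in result["data"]]
    chunk_vecs, q_vec = vectors[:-1], vectors[-1]

    # Rank chunks by cosine similarity to the question and keep the top_k;
    # grounding the prompt in only these narrows what the model can embellish.
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    order = sorted(
        range(len(chunks)), key=lambda i: cosine(chunk_vecs[i], q_vec), reverse=True
    )
    return [chunks[i] for i in order[:top_k]]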