How to make OpenAI stop prepending "A:" or "Answer:" to its answers?
Question
Sometimes, an OpenAI API call like
const completion = await openai.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [
    {
      role: 'system',
      content: `You are ChatbotAssistant, an automated service to answer questions of a website visitors. ` +
        `You respond in a short, very conversational friendly style. If you can't find an answer, provide no answer and apologize.`
    },
    {role: 'user', content: userQuestion}
  ]
})
const responseText = completion.data.choices[0].message.content
returns an answer with "A:" or "Answer:" prepended. Since I don't want that prefix, I tried to instruct the model explicitly not to add it by changing the system message to:
`You are ChatbotAssistant, an automated service to answer questions of a website visitors. ` +
`Do not prepend "A:" or "Answer:" to your answers. ` +
`You respond in a short, very conversational friendly style. If you can't find an answer, provide no answer and apologize.`
but to no effect.
I know I can handle this in JavaScript, e.g.:
let cleanedText = responseText
if (responseText.startsWith('A:') || responseText.startsWith('Answer:')) {
  cleanedText = responseText.replace(/^(A:|Answer:)\s*/, '')
}
but is there an OpenAI-side solution to this? Thanks.
Answer 1
Score: 2
The way to fix this would be to set the logit bias (there is an additional walkthrough here) and also use n.
Basically, you can use logit bias to set the likelihood of a specific token, from -100 (banned) to 100 (exclusive selection). You do this by taking the token ID and giving it the appropriate value.
First, run the token/tokens you want banned through a tokenizer tool. This will require some thought, as not all character combinations fit within one token. In your case, "A:" is the token pair [32, 25] and "Answer:" is [33706, 25]; taking it further, "Answer" is token [33706], ":" is [25], and "A" is [32].
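As a minimal sketch of that lookup, assuming the gpt-3-encoder npm package (token IDs depend on the model's vocabulary, so verify them for the exact model you call):

// npm install gpt-3-encoder
const { encode } = require('gpt-3-encoder')

// Print the token IDs you would pass to logit_bias.
for (const text of ['A:', 'Answer:', 'Answer', ':', 'A']) {
  console.log(text, '->', encode(text))
}
// Per the IDs quoted above, this should print e.g.
// 'A:' -> [ 32, 25 ] and 'Answer:' -> [ 33706, 25 ]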
So you'll want to think about your combinations here, because while you want to ban "Answer:" and "A:", you likely don't want to ban the word "Answer" or the letter "A". One potential option is to ban ":" with a -100 value, and put some bias against "Answer" and a slight bias against "A". This will likely require some experimentation as you figure out the right ratios, and you may come across other things you should ban, like "a:", "answer:", "A-", etc.
Once you experiment, that will help you ban words. If you want some additional buffer/protection, you can also set n to a value higher than 1. n allows you to return more than one answer, so you can have it send over, say, the best 10 answers and go through them sequentially until one matches, as sketched below.
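A minimal sketch of that sequential scan, assuming the same createChatCompletion response shape used in this question (completion.data.choices holds the n answers):

// Pick the first of the n answers without the unwanted prefix; if every
// answer carries it, fall back to stripping it from the first one.
const prefix = /^(A:|Answer:)\s*/
const choices = completion.data.choices.map(c => c.message.content)
const responseText =
  choices.find(text => !prefix.test(text)) ?? choices[0].replace(prefix, '')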
So basically, you experiment with your logit bias until you've applied the right values to ensure "A:" and "Answer:" don't show up, and you can add some additional code to help you filter them out when they do. Example below:
const completion = await openai.createChatCompletion({
  model: 'gpt-3.5-turbo',
  logit_bias: {25: -100, 33706: -10, 32: -1},
  n: 5,
  messages: [
    {
      role: 'system',
      content: `You are ChatbotAssistant, an automated service to answer questions of a website visitors. ` +
        `You respond in a short, very conversational friendly style. If you can't find an answer, provide no answer and apologize.`
    },
    {role: 'user', content: userQuestion}
  ]
})
const responseText = completion.data.choices[0].message.content
The code above bans ":", sets a bias against "Answer", sets a small bias against "A", and returns the top 5 results, giving me backups if something goes wrong. As mentioned, you'll need to experiment with the logit bias, and you will likely need to add more banned tokens as new variants of "A:" (like "a:") pop up.
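On the filtering side, one option is to widen the question's cleanup regex into a single case-insensitive pattern that also catches the variants mentioned above ("a:", "answer:", "A-"); this is a sketch, not an exhaustive list of prefixes:

// Strips "A:", "a:", "Answer:", "answer:", "A-", etc. from the start of the reply;
// [:\-] matches either a colon or a hyphen separator.
const cleanedText = responseText.replace(/^a(nswer)?\s*[:\-]\s*/i, '')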