How can I count tokens before making an API call?


Question

import { Configuration, OpenAIApi } from "openai"
import { readFile } from './readFile.js'

// Config OpenAI API
const configuration = new Configuration({
    organization: "xyx......",
    apiKey: "abc.......",
});

// OpenAI API instance
export const openai = new OpenAIApi(configuration);


const generateAnswer = async (conversation, userMessage) => {
    try {
        const dataset = await readFile();
        const dataFeed = { role: 'system', content: dataset };
        const prompt = conversation ? [...conversation?.messages, dataFeed, userMessage] : [dataFeed, userMessage];
        const completion = await openai.createChatCompletion({
            model: "gpt-3.5-turbo",
            messages: prompt
        })

        const aiMessage = completion.data.choices[0].message;
        console.log(completion.data.usage)
        return aiMessage
    } catch (e) {
        console.log(e)
    }
}
export { generateAnswer };

I am trying to create a chatbot in which I provide a data feed at the start (business information) plus the conversation history to the chat API.
I want to count the tokens in the conversation and trim the prompt if it exceeds the limit, before making the API call.
I have tried using the gpt3 encoder to count tokens, but my prompt is an array of message objects, not a string.
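
Something like this per-message count is what I'm after (a minimal sketch, assuming the gpt-3-encoder npm package, whose `encode` takes a plain string):

```ts
import { encode } from "gpt-3-encoder";

// Sum token counts over the messages array; each message is { role, content }.
// This ignores the few extra tokens the chat format adds around each message.
const countPromptTokens = (messages: { role: string; content: string }[]) =>
  messages.reduce((sum, m) => sum + encode(m.role).length + encode(m.content).length, 0);
```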

Answer 1

Score: 8


Exact Method

A precise way is to use tiktoken, which is a Python library. Taken from the OpenAI cookbook:

```python
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
num_tokens = len(encoding.encode("Look at all them pretty tokens"))
print(num_tokens)
```

More generally, you can use

```python
encoding = tiktoken.get_encoding("cl100k_base")
```

where cl100k_base is used by gpt-4, gpt-3.5-turbo, and text-embedding-ada-002;
p50k_base is used by the Codex models, text-davinci-002, and text-davinci-003; and r50k_base is what's used by gpt2 and GPT-3 models like davinci. r50k_base and p50k_base often (but not always) give the same results.
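
If you need this from Node rather than Python, the same encodings are exposed by the @dqbd/tiktoken port described in Answer 2 below; a minimal sketch, assuming that package:

```ts
import { get_encoding } from "@dqbd/tiktoken";

// cl100k_base covers gpt-4, gpt-3.5-turbo, and text-embedding-ada-002.
const enc = get_encoding("cl100k_base");
console.log(enc.encode("Look at all them pretty tokens").length);
enc.free(); // release the underlying resources
```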

Approximation Method

You usually just want your program not to crash from exceeding the token limit, so you need a character-count cutoff that guarantees you won't exceed it. Testing with tiktoken reveals that token count is usually linear in character count, particularly with newer models, and that 1/e seems to be a robust conservative constant of proportionality. So we can write a trivial equation for conservatively relating tokens to characters:

`#tokens <? #characters * (1/e) + safety_margin`

where <? means this is very likely true, and 1/e = 0.36787944117144232159552377016146.
an adaquate choice for safety_margin seems to be 2. In some cases when using with r50k_base this needed to be 8 after 2000 characters. There are two cases where the safety margin comes into play: first for very low character count; there a value of 2 is enough and needed for all models. Second is if the model fails to reason about what it's looking at, resulting in a wobbly/noisy relationship between character count and # tokens with a constant of proportionality closer to 1/e, that may meander over the 1/e limit.

Main Approximation Result

Now reverse this to get a maximum number of characters to fit within a token limit:

`max_characters = (#tokens_limit - safety_margin) * e`

where e = 2.7182818284590... Now you've got an instant, language- and platform-independent, dependency-free solution for not exceeding the token limit.
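
A minimal sketch of this cutoff in TypeScript (the function name and the 4096-token limit are example choices, not from the original):

```ts
const E = Math.E; // 2.718281828459045...

// Conservatively truncate `text` so its token count should stay within `tokenLimit`:
// max_characters = (tokenLimit - safetyMargin) * e, per the approximation above.
function truncateToTokenLimit(text: string, tokenLimit: number, safetyMargin = 2): string {
    const maxCharacters = Math.floor((tokenLimit - safetyMargin) * E);
    return text.slice(0, maxCharacters);
}

// Example: a 4096-token limit allows roughly (4096 - 2) * e ≈ 11128 characters.
const longPrompt = "lorem ipsum ".repeat(2000); // 24000 characters
console.log(truncateToTokenLimit(longPrompt, 4096).length); // 11128
```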

Show Your Work

With a paragraph of English

For model cl100k_base with English text, #tokens = #chars * 0.2016568976249748 - 5.277472848558375
For model p50k_base with English text, #tokens = #chars * 0.20820463015644564 - 4.697668008159241
For model r50k_base with English text, #tokens = #chars * 0.20820463015644564 - 4.697668008159241

[plots: token count vs. character count with the fitted lines, English text]

With a paragraph of Lorem ipsum

For model cl100k_base with Lorem ipsum, #tokens = #chars * 0.325712437966849 - 5.186204883743613
For model p50k_base with Lorem ipsum, #tokens = #chars * 0.3622005352481815 + 2.4256199405020595
For model r50k_base with Lorem ipsum, #tokens = #chars * 0.3622005352481815 + 2.4256199405020595

[plots: token count vs. character count with the fitted lines, Lorem ipsum]

With a paragraph of Python code:

For model cl100k_base with sampletext2, #tokens = #chars * 0.2658446137873485 - 0.9057612056294033
For model p50k_base with sampletext2, #tokens = #chars * 0.3240730228908291 - 5.740016444496973
For model r50k_base with sampletext2, #tokens = #chars * 0.3754121847018643 - 19.96012603693265

[plots: token count vs. character count with the fitted lines, Python code sample]
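
As a quick sanity check on these fits, a tiny sketch applying the cl100k_base English-text line (coefficients copied from above; the helper name is illustrative):

```ts
// Linear estimate from the cl100k_base / English-text fit above.
const estimateTokens = (chars: number) => chars * 0.2016568976249748 - 5.277472848558375;

console.log(Math.round(estimateTokens(1000))); // ≈ 196 tokens
// The conservative 1/e bound would predict at most 1000 * (1/e) + 2 ≈ 370.
```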

Answer 2

Score: 5


The question is old, but it may help someone. There is a Node.js library called [tiktoken](https://www.npmjs.com/package/@dqbd/tiktoken), which is a fork of the original tiktoken library.

All the [examples](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) on the official tiktoken repo are valid with small changes.

Install the tiktoken npm package:
```sh
npm install @dqbd/tiktoken
```

Calculate the number of tokens in a text string:

```ts
import { encoding_for_model } from "@dqbd/tiktoken";

// Returns the number of tokens in a text string
function numTokensFromString(message: string) {
  const encoder = encoding_for_model("gpt-3.5-turbo");

  const tokens = encoder.encode(message);
  encoder.free(); // release the encoder when done
  return tokens.length;
}
```

Decode the tokens back to a string:

```ts
import { encoding_for_model } from "@dqbd/tiktoken";

function decodeTokens(message: Uint32Array) {
  const encoder = encoding_for_model("gpt-3.5-turbo");

  const words = encoder.decode(message);
  encoder.free();
  return new TextDecoder().decode(words);
}
```
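
Tying this back to the question's array-of-message-objects prompt, a hedged sketch that sums per-message counts with numTokensFromString and trims the history before calling the API (the +4 per message is an approximation of the chat format's per-message overhead, based on the OpenAI cookbook's accounting; the limit is an example value):

```ts
// Approximate token count for a chat-completion prompt (array of messages),
// reusing numTokensFromString from above.
function numTokensFromMessages(messages: { role: string; content: string }[]) {
  return messages.reduce(
    (total, m) => total + numTokensFromString(m.role) + numTokensFromString(m.content) + 4,
    0
  );
}

// Drop the oldest messages until the prompt fits, leaving room in the
// context window for the model's reply.
function trimPrompt(messages: { role: string; content: string }[], limit = 3000) {
  const trimmed = [...messages];
  // You may want to protect the system data feed instead of shifting it off.
  while (trimmed.length > 1 && numTokensFromMessages(trimmed) > limit) {
    trimmed.shift();
  }
  return trimmed;
}
```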
