How to send OpenAI stream response from Next.js API to client

Question

I tried openai-streams + nextjs-openai, but they only work on Node 18+; they fail on Node 17 and lower. I'm restricted to Node 17 and lower because DigitalOcean App Platform does not currently support Node 18.

I also tried this method, which works well on the client side, but it exposes the API key. I want to implement it within the Next.js API route, but I'm unable to pass the streaming response to the client.

With the code below, I can only get the first chunk of the response from the API route, and I'm not able to get a streaming response that produces the ChatGPT typewriter effect. Please kindly help.

// /api/prompt.js

import { Configuration, OpenAIApi } from "openai";
import { Readable } from "readable-stream";

const configuration = new Configuration({
  apiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

export default async function handler(req, res) {
  const completion = await openai.createCompletion(
    {
      model: "text-davinci-003",
      prompt: "tell me a story",
      max_tokens: 500,
      stream: true,
    },
    { responseType: "stream" }
  );

  completion.data.on("data", async (data) => {
    const lines = data
      .toString()
      .split("\n")
      .filter((line) => line.trim() !== "");

    for (const line of lines) {
      const message = line.replace(/^data: /, "");
      if (message === "[DONE]") {
        return;
      }
      try {
        const parsed = JSON.parse(message);
        const string = parsed.choices[0].text;
        Readable.from(string).pipe(res);
      } catch (error) {
        console.error("Could not JSON parse stream message", message, error);
      }
    }
  });
}
// /components/Completion.js

import { useState } from "react";

export default function Completion() {
  const [text, setText] = useState("");

  const generate = async () => {
    const response = await fetch("/api/prompt");
    console.log("response: ", response);
    const text = await response.text();
    console.log("text: ", text);
    setText((state) => state + text);
  };
  
  // ... rest
}
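For reference, the per-chunk parsing done inside the `data` handler of `/api/prompt.js` above can be isolated into a small helper. This is only a sketch (`extractTokens` is not part of any library): each chunk from the OpenAI streaming API is a series of `data: {json}` lines, ending with a final `data: [DONE]` marker.

```javascript
// Sketch: extract the token strings from one raw SSE chunk of an OpenAI
// completions stream. Each chunk contains lines of the form "data: {json}",
// and the stream ends with a "data: [DONE]" line.
function extractTokens(chunk) {
  const tokens = [];
  const lines = chunk
    .toString()
    .split("\n")
    .filter((line) => line.trim() !== "");

  for (const line of lines) {
    const message = line.replace(/^data: /, "");
    if (message === "[DONE]") break; // end-of-stream marker
    try {
      const parsed = JSON.parse(message);
      tokens.push(parsed.choices[0].text);
    } catch (error) {
      // ignore JSON fragments split across chunk boundaries
    }
  }
  return tokens;
}
```

With a helper like this, the route handler only needs to decide what to do with each token (write it to `res`, buffer it, etc.), which makes the streaming bug easier to isolate.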

Answer 1

Score: 1

You can use StreamingTextResponse from the Vercel AI SDK.

Here is some example code from their docs:

import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream, StreamingTextResponse } from 'ai'

const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY
})
const openai = new OpenAIApi(config)

export const runtime = 'edge'

export async function POST(req) {
  const { messages } = await req.json()
  const response = await openai.createChatCompletion({
    model: 'gpt-4',
    stream: true,
    messages
  })
  const stream = OpenAIStream(response)
  return new StreamingTextResponse(stream)
}

Notes:

  1. This requires the Edge Runtime.
  2. The openai-edge library requires Node 18+. However, it is expected to be deprecated once the official openai library supports streaming.
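Since `StreamingTextResponse` sends plain text chunks, the client can read them incrementally with the standard `ReadableStream` reader instead of `response.text()` (which waits for the whole body). A minimal sketch, not taken from the answer; the route path and `setText` usage are assumptions:

```javascript
// Sketch: incrementally read a streamed text response body (a WHATWG
// ReadableStream of Uint8Array chunks), invoking onChunk for each decoded
// piece so the UI can append it as it arrives.
async function readTextStream(stream, onChunk) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let full = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value, { stream: true });
    full += chunk;
    onChunk(chunk); // e.g. setText((t) => t + chunk) in a React component
  }
  return full;
}

// Hypothetical usage in a component event handler:
// const response = await fetch("/api/chat", {
//   method: "POST",
//   body: JSON.stringify({ messages }),
// });
// await readTextStream(response.body, (chunk) => setText((t) => t + chunk));
```

This is the client-side counterpart the original question was missing: `response.text()` resolves only once the stream closes, whereas the reader loop surfaces each chunk immediately.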

Answer 2

Score: 0

You can use the openai-streams/node entrypoint on Node <18, which will return a Node.js Readable instead of a WHATWG ReadableStream. I'll update the docs to be clearer soon.

Node: Consuming streams in a Next.js API Route

If you cannot use an Edge runtime or want to consume Node.js streams for another reason, use openai-streams/node:

import type { NextApiRequest, NextApiResponse } from "next";
import { OpenAI } from "openai-streams/node";

export default async function test(_: NextApiRequest, res: NextApiResponse) {
  const stream = await OpenAI("completions", {
    model: "text-davinci-003",
    prompt: "Write a happy sentence.\n\n",
    max_tokens: 25,
  });

  stream.pipe(res);
}
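As an alternative to `stream.pipe(res)`, the Node.js Readable can also be iterated with `for await...of`, which makes it straightforward to add error handling or transform chunks before writing them. A sketch under that assumption (`forwardStream` is not part of openai-streams):

```javascript
// Sketch: forward a Node.js Readable to the API response chunk by chunk,
// closing the response even if the upstream stream errors mid-way.
async function forwardStream(stream, res) {
  try {
    for await (const chunk of stream) {
      res.write(chunk); // send each token chunk as soon as it arrives
    }
  } catch (err) {
    console.error("Stream error:", err);
  } finally {
    res.end(); // always close the response so the client stops waiting
  }
}
```

In the handler above, `await forwardStream(stream, res);` could replace the `stream.pipe(res);` call if you need that extra control.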

huangapple
  • Published on 2023-05-17 22:15:59
  • Please keep this link when reposting: https://go.coder-hub.com/76273091.html