How can I allocate chat memory to the OpenAI model?


Question


I'm working in Django, and I have a view where I call the OpenAI API. On the frontend I work with React, where I have a chatbot. I want the model to keep a record of the conversation, like the ChatGPT page does.

```python
class chatbot(APIView):
    def post(self, request):
        chatbot_response = None
        if api_key is not None and request.method == 'POST':
            openai.api_key = api_key
            user_input = request.data.get('user_input')
            prompt = user_input
            response = openai.Completion.create(
                model='text-davinci-003',
                prompt=prompt,
                max_tokens=250,
                temperature=0.5
            )
            chatbot_response = response["choices"][0]["text"]
        if chatbot_response is not None:
            return Response({"response": chatbot_response}, status=status.HTTP_200_OK)
        else:
            return Response({'errors': {'error_de_campo': ['Prompt vacio']}},
                            status=status.HTTP_404_NOT_FOUND)
```

I was planning to create a model and save the questions in the database, but I don't know how to integrate that information into the view. I'm also worried about token spending; I don't really know how it works. I hope someone can clarify these doubts for me. Thank you so much.

Answer 1

Score: 0


How can I allocate chat memory to the OpenAI model?

Save it in a database

By saving each exchange into a database, you can keep the history permanently. Follow these guides:

https://docs.djangoproject.com/en/4.2/topics/db/queries/

https://docs.djangoproject.com/en/4.0/intro/tutorial02/#database-setup

1. You need to create a database model containing a user identifier and the chat history. Note that Django 3.1+ ships a built-in `models.JSONField`, so the external `jsonfield` package isn't required:

```python
from django.db import models

class ChatHistory(models.Model):
    owner_uid = models.CharField(max_length=100)
    chat = models.JSONField()  # would be a list of {USER, BOT} interactions

    def __str__(self):
        return self.owner_uid
```
2. Create a get-history endpoint:

```python
from .models import ChatHistory

class chatbot(APIView):
    def post(self, request):
        chatbot_response = None
        if api_key is not None and request.method == 'POST':
            openai.api_key = api_key
            user_input = request.data.get('user_input')
            prompt = user_input
            response = openai.Completion.create(
                model='text-davinci-003',
                prompt=prompt,
                max_tokens=250,
                temperature=0.5
            )
            chatbot_response = response["choices"][0]["text"]
            # NOTE: to build up memory, this is where the {USER, BOT} turn
            # should also be appended to the user's ChatHistory row.
        if chatbot_response is not None:
            return Response({"response": chatbot_response}, status=status.HTTP_200_OK)
        else:
            return Response({'errors': {'error_de_campo': ['Prompt vacio']}},
                            status=status.HTTP_404_NOT_FOUND)

    def get(self, request):
        uid = request.GET.get('uid')  # assuming the uid is passed as a query parameter
        if uid is None:
            return Response({'errors': {'uid': ['UID parameter is required.']}},
                            status=status.HTTP_400_BAD_REQUEST)
        chat_history = ChatHistory.objects.filter(owner_uid=uid)
        # ChatHistorySerializer is a DRF serializer for ChatHistory, assumed defined elsewhere
        serialized_data = ChatHistorySerializer(chat_history, many=True).data
        return Response({'data': serialized_data}, status=status.HTTP_200_OK)
```

Showing it in the frontend?

Well, I don't know your frontend code, but it's just JSON data; you should manage to display it easily!

Worried about the cost?

Well, you shouldn't use text-davinci-003 for a chatbot in the first place!

text-davinci-003 is an InstructGPT model. Instruct models are optimized to follow single-turn instructions. Ada is the fastest model, while Davinci is the most powerful (and very expensive).

You should use ChatCompletion with gpt-3.5-turbo instead:

https://platform.openai.com/docs/api-reference/chat/create?lang=python

https://platform.openai.com/docs/api-reference/chat/create?lang=python

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(completion.choices[0].message)
```
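The ChatCompletion API is also what makes the "memory" part practical: the saved {USER, BOT} turns from ChatHistory can be replayed as prior messages before the new user input. A minimal sketch, assuming each stored turn is a dict with `USER` and `BOT` keys (that shape is an assumption, matching the comment in the model above):

```python
def build_messages(history, user_input):
    # history: list of assumed {"USER": ..., "BOT": ...} turns loaded from
    # ChatHistory.chat. Every replayed turn counts toward the model's token
    # limit, so a real view should truncate or summarize old turns.
    messages = [{"role": "system", "content": "You are a helpful chatbot."}]
    for turn in history:
        messages.append({"role": "user", "content": turn["USER"]})
        messages.append({"role": "assistant", "content": turn["BOT"]})
    messages.append({"role": "user", "content": user_input})
    return messages


# Example: one saved turn plus the new question gives four messages.
msgs = build_messages([{"USER": "Hi", "BOT": "Hello! How can I help?"}],
                      "What did I just say?")
```

The resulting list can be passed directly as the `messages` argument of `openai.ChatCompletion.create`.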

Why? Take a look at the pricing:

gpt-3.5-turbo: $0.002 / 1K tokens

Davinci: $0.0200 / 1K tokens

That's 10 times more cost-effective!
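As a quick sanity check on that ratio, a sketch of the arithmetic (rates as quoted above; 250 tokens matches the `max_tokens` value in the question):

```python
def cost_usd(tokens, rate_per_1k_usd):
    # Cost of a completion: tokens consumed times the per-1K-token rate.
    return tokens * rate_per_1k_usd / 1000

davinci_cost = cost_usd(250, 0.0200)  # text-davinci-003
turbo_cost = cost_usd(250, 0.002)     # gpt-3.5-turbo
# davinci_cost / turbo_cost is 10x, matching the claim above
```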

huangapple
  • Published on 2023-05-30 01:02:25
  • Please keep this link when reposting: https://go.coder-hub.com/76359137.html