How can I allocate chat memory to the openai model
Question
I'm working in Django. I have a view where I call the OpenAI API, and on the frontend I work with React, where I have a chatbot. I want the model to keep a record of the conversation, like the ChatGPT page does.
class chatbot(APIView):
    def post(self, request):
        chatbot_response = None
        if api_key is not None and request.method == 'POST':
            openai.api_key = api_key
            user_input = request.data.get('user_input')
            prompt = user_input
            response = openai.Completion.create(
                model='text-davinci-003',
                prompt=prompt,
                max_tokens=250,
                temperature=0.5
            )
            chatbot_response = response["choices"][0]["text"]
        if chatbot_response is not None:
            return Response({"response": chatbot_response}, status=status.HTTP_200_OK)
        else:
            return Response({'errors': {'error_de_campo': ['Promt vacio']}},
                            status=status.HTTP_404_NOT_FOUND)
I was planning to create a model and save the questions in the database, but I don't know how to integrate that information into the view. I'm also worried about token spending, since I don't really know how it works. I hope someone can clarify these doubts for me. Thank you so much.
Answer 1
Score: 0
How can I allocate chat memory to the openai model?
Save it in the database.
By saving it into the database, you can keep the history permanently.
Follow these guides:
https://docs.djangoproject.com/en/4.2/topics/db/queries/
https://docs.djangoproject.com/en/4.0/intro/tutorial02/#database-setup
- You need to create a database model containing a user identification and the chat history:
import jsonfield  # third-party django-jsonfield package; Django 3.1+ also has models.JSONField built in
from django.db import models


class ChatHistory(models.Model):
    owner_uid = models.CharField(max_length=100)
    chat = jsonfield.JSONField()  # would hold the {USER, BOT} interactions

    def __str__(self):
        return self.owner_uid  # there is no `name` field on this model, so show the owner uid
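The get endpoint below references a ChatHistorySerializer that isn't shown anywhere. A minimal sketch, assuming a standard Django REST Framework ModelSerializer in a serializers.py next to the model (the module path and field list are assumptions):

# serializers.py (assumed module path)
from rest_framework import serializers

from .models import ChatHistory


class ChatHistorySerializer(serializers.ModelSerializer):
    class Meta:
        model = ChatHistory
        fields = ['owner_uid', 'chat']  # expose the uid and the stored turns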
- Create a "get history" endpoint:
import openai
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

from .models import ChatHistory
from .serializers import ChatHistorySerializer  # see the serializer sketch above


class chatbot(APIView):
    def post(self, request):
        chatbot_response = None
        if api_key is not None and request.method == 'POST':  # api_key defined elsewhere, e.g. in settings
            openai.api_key = api_key
            user_input = request.data.get('user_input')
            prompt = user_input
            response = openai.Completion.create(
                model='text-davinci-003',
                prompt=prompt,
                max_tokens=250,
                temperature=0.5
            )
            chatbot_response = response["choices"][0]["text"]
        if chatbot_response is not None:
            return Response({"response": chatbot_response}, status=status.HTTP_200_OK)
        else:
            return Response({'errors': {'error_de_campo': ['Prompt vacio']}},
                            status=status.HTTP_404_NOT_FOUND)

    def get(self, request):
        uid = request.GET.get('uid')  # assuming the uid is passed as a query parameter
        if uid is None:
            return Response({'errors': {'uid': ['UID parameter is required.']}},
                            status=status.HTTP_400_BAD_REQUEST)
        chat_history = ChatHistory.objects.filter(owner_uid=uid)
        serialized_data = ChatHistorySerializer(chat_history, many=True).data
        return Response({'data': serialized_data}, status=status.HTTP_200_OK)
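As written, the post handler above still answers every request in isolation. To give the model actual "memory", each exchange has to be written to ChatHistory and the previous turns fed back in on the next request. Below is a rough sketch of that idea, assuming one ChatHistory row per user whose chat field holds a list of {user, bot} dicts; the helper names, the uid parameter, and the 10-turn cutoff are assumptions, not part of the original answer:

def get_or_create_history(uid):
    # one ChatHistory row per user; the chat field starts as an empty list
    history, _ = ChatHistory.objects.get_or_create(owner_uid=uid, defaults={'chat': []})
    return history


def build_prompt(history, user_input):
    # prepend the stored turns to the new question; truncate to limit token spending
    lines = []
    for turn in history.chat[-10:]:
        lines.append(f"User: {turn['user']}")
        lines.append(f"Bot: {turn['bot']}")
    lines.append(f"User: {user_input}")
    lines.append("Bot:")
    return "\n".join(lines)


def save_turn(history, user_input, bot_reply):
    # append the new exchange and persist it
    history.chat.append({'user': user_input, 'bot': bot_reply})
    history.save()

Inside post you would then call build_prompt(...) before openai.Completion.create(...) and save_turn(...) once you have the response; trimming the history is what keeps the prompt (and the token bill) from growing without bound.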
Showing it in the frontend?
Well, I don't know your front-end code, but it's just JSON data;
you should manage to do it easily!
Worried about the cost?
Well, you shouldn't use text-davinci-003 for a chatbot in the first place!
text-davinci-003 is an InstructGPT model.
Instruct models are optimized to follow single-turn instructions. Ada is the fastest model, while Davinci is the most powerful (and very expensive).
You should use ChatCompletion with gpt-3.5-turbo instead:
https://platform.openai.com/docs/api-reference/chat/create?lang=python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(completion.choices[0].message)
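ChatCompletion also makes the memory part more natural, because the messages list is exactly where previous turns go. A minimal sketch that rebuilds the messages from a ChatHistory row stored as above (the system prompt, helper name, and 10-turn cutoff are assumptions):

def build_messages(history, user_input):
    # convert the stored {user, bot} turns into ChatCompletion messages
    messages = [{"role": "system", "content": "You are a helpful chatbot."}]
    for turn in history.chat[-10:]:  # keep only recent turns to control token usage
        messages.append({"role": "user", "content": turn['user']})
        messages.append({"role": "assistant", "content": turn['bot']})
    messages.append({"role": "user", "content": user_input})
    return messages


completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=build_messages(history, user_input),
    max_tokens=250,
    temperature=0.5,
)
chatbot_response = completion.choices[0].message["content"]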
Why? Take a look at the pricing:
gpt-3.5-turbo: $0.002 / 1K tokens
Davinci: $0.0200 / 1K tokens
That's a 10x difference in cost effectiveness!
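If you want to see where the tokens (and the money) actually go, you can count them locally with OpenAI's tiktoken package before sending a request; the API response also reports the real usage in response["usage"]["total_tokens"]. A small illustration (the example prompt and the price constant are just for the arithmetic):

import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
prompt = "Hello! Can you summarize our previous conversation?"
n_tokens = len(enc.encode(prompt))
# at $0.002 per 1K tokens, the prompt alone costs roughly:
print(n_tokens, "tokens ->", round(n_tokens / 1000 * 0.002, 6), "USD")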