Real-time object detection with YOLO model not working

Question

I have trained a custom YOLO model to detect square slots on a board, and it works with more than 95% accuracy on images.

But as soon as I switch to video detection, it does not seem to detect a single thing.

I am using the following code to run real-time object detection:

import cv2
import numpy as np

cap = cv2.VideoCapture('../video/1st/output.mp4')
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:  # stop when no more frames can be read
        break

    results = model(frame)                    # run the YOLOv5 model on the frame
    final_img = np.squeeze(results.render())  # frame with the predicted boxes drawn

    cv2.imshow("YOLO", final_img)
    if cv2.waitKey(10) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

I load the model using this code:

import torch

# Load the custom-trained weights via PyTorch Hub
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="yolov5/runs/train/exp36/weights/best.pt", force_reload=True)
model.conf = 0.6  # confidence threshold for reported detections
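
For reference, single-image inference along these lines does show the expected detections (the image path is just an illustrative example):

# 'model' is the YOLOv5 model loaded above; the image path is illustrative
results = model("board.jpg")
results.print()   # per-image summary of detected boxes
results.show()    # display the image with the predicted boxes drawn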

I have even tried splitting the available video into JPEGs, running the model on the individual images, saving the outputs, and then merging the output images into a new video file.
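
Roughly, that split-and-merge workflow looked like the sketch below (the directory names, re-encoding settings, and RGB-to-BGR handling are illustrative details rather than my exact script):

import glob
import os
import cv2

os.makedirs('frames', exist_ok=True)
os.makedirs('frames_out', exist_ok=True)

# 1) Split the video into JPEG frames
cap = cv2.VideoCapture('../video/1st/output.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
idx = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imwrite(f'frames/{idx:05d}.jpg', frame)
    idx += 1
cap.release()

# 2) Run the model on each saved frame and write out an annotated copy
for path in sorted(glob.glob('frames/*.jpg')):
    results = model(path)            # 'model' is loaded as shown above
    annotated = results.render()[0]  # frame with boxes drawn (RGB, since the input was a file)
    annotated = cv2.cvtColor(annotated, cv2.COLOR_RGB2BGR)
    cv2.imwrite(os.path.join('frames_out', os.path.basename(path)), annotated)

# 3) Merge the annotated frames back into a new video file
out_paths = sorted(glob.glob('frames_out/*.jpg'))
h, w = cv2.imread(out_paths[0]).shape[:2]
writer = cv2.VideoWriter('merged.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
for p in out_paths:
    writer.write(cv2.imread(p))
writer.release()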

That works perfectly, so the model is detecting something.

But as soon as I switch back to video, it seems to detect nothing again.

Answer 1

Score: 0

Yes, you cannot see anything because of the None-type object returned by results.render(). You can change the code like this:

cap = cv2.VideoCapture('../video/1st/output.mp4')
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    results = model(frame)
    bboxes = results.xyxy[0].cpu().tolist()  # one [x1, y1, x2, y2, confidence, class_id] row per detection
    for bbox in bboxes:
        conf = f'{bbox[4]:.4f}'      # confidence of the prediction
        bbox = list(map(int, bbox))  # convert floats to integers
        class_id = bbox[5]           # class id
        bbox = bbox[:4]
        cv2.rectangle(frame, (bbox[0], bbox[1]), (bbox[2], bbox[3]), color=(255, 255, 255), thickness=3)
    cv2.imshow("YOLO", frame)
    if cv2.waitKey(10) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
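
The conf and class_id values computed above are not drawn on the frame; if you also want them shown, a helper along these lines can replace the drawing loop (draw_detections is a hypothetical name; it assumes the [x1, y1, x2, y2, confidence, class_id] row layout of results.xyxy[0]):

import cv2

def draw_detections(frame, detections):
    # Draw a box and a "class confidence" label for each detection row
    for x1, y1, x2, y2, conf, cls in detections:
        p1, p2 = (int(x1), int(y1)), (int(x2), int(y2))
        cv2.rectangle(frame, p1, p2, color=(255, 255, 255), thickness=3)
        label = f'{int(cls)} {conf:.2f}'
        cv2.putText(frame, label, (p1[0], max(p1[1] - 5, 15)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2)
    return frame

Inside the loop it would be called as frame = draw_detections(frame, results.xyxy[0].cpu().tolist()).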

Then write the frames into a video; the full code should look like this:

import cv2

input_video_path = 'path/to/your/video.mp4'  # enter your video path
cap = cv2.VideoCapture(input_video_path)
fps = int(cap.get(cv2.CAP_PROP_FPS))
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

output_video_path = 'output_video.mp4'    # output video path
fourcc = cv2.VideoWriter_fourcc(*'mp4v')  # 'mp4v' suits the .mp4 container ('XVID' is usually paired with .avi)
out = cv2.VideoWriter(output_video_path, fourcc, fps, (frame_width, frame_height))

model = ...  # load your model here, e.g. via torch.hub.load as shown in the question

while True:
    ret, frame = cap.read()
    if not ret:
        break
    results = model(frame)
    bboxes = results.xyxy[0].cpu().tolist()  # one [x1, y1, x2, y2, confidence, class_id] row per detection
    for bbox in bboxes:
        conf = f'{bbox[4]:.4f}'      # confidence of the prediction
        bbox = list(map(int, bbox))  # convert floats to integers
        class_id = bbox[5]           # class id
        bbox = bbox[:4]
        cv2.rectangle(frame, (bbox[0], bbox[1]), (bbox[2], bbox[3]), color=(255, 255, 255), thickness=3)
    out.write(frame)
    # cv2_imshow(frame)  # Colab display helper; use cv2.imshow("YOLO", frame) locally
    if cv2.waitKey(10) & 0xFF == ord("q"):
        break

cap.release()
out.release()

# Close all OpenCV windows
cv2.destroyAllWindows()

References:
https://docs.ultralytics.com/
https://docs.ultralytics.com/yolov5/
https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/
