Kivy app not detecting touch events when using cv.VideoCapture()
Question
I am working on a Kivy app. At some point, the user will have to touch the screen to indicate where the corners of a tennis court are. I want to take the coordinates of that touch and create an ROI (region of interest). I did this easily with plain OpenCV:
import numpy as np
import cv2 as cv

cap = cv.VideoCapture('partido/06_centro.mp4')
cv.namedWindow("Video")

def get_roi(frame, x, y):
    # Mask everything except a 100x100 square centred on the clicked point
    roi_corners = np.array([[x-50, y-50], [x+50, y-50], [x+50, y+50], [x-50, y+50]])
    mask = np.zeros(frame.shape[:2], np.uint8)
    cv.drawContours(mask, [roi_corners], 0, (255, 255, 255), -1)
    return cv.bitwise_and(frame, frame, mask=mask)

def mouse_callback(event, x, y, flags, param):
    if event == cv.EVENT_LBUTTONDOWN:
        roi_frame = get_roi(frame, x, y)
        cv.imshow("ROI Frame", roi_frame)

cv.setMouseCallback("Video", mouse_callback)

while True:
    ret, frame = cap.read()
    if not ret:  # stop when the video ends
        break
    cv.imshow("Video", frame)
    key = cv.waitKey(1) & 0xFF
    if key == ord('q'):
        break

cap.release()
cv.destroyAllWindows()
This code works, but when I try to do the same with Kivy I can show the video, but it doesn't recognize the touch event (the print statement is never executed):
import cv2
from kivy.app import App
from kivy.uix.image import Image
from kivy.graphics.texture import Texture
from kivy.clock import Clock
import numpy as np

def get_roi(frame, x, y):
    roi_corners = np.array([[x-50, y-50], [x+50, y-50], [x+50, y+50], [x-50, y+50]])
    mask = np.zeros(frame.shape[:2], np.uint8)
    cv2.drawContours(mask, [roi_corners], 0, (255, 255, 255), -1)
    return cv2.bitwise_and(frame, frame, mask=mask)

class VideoPlayerApp(App):
    def build(self):
        self.image = Image(allow_stretch=True, keep_ratio=False)
        Clock.schedule_interval(self.update, 1.0/30.0)
        self.cap = cv2.VideoCapture("./partido/06_centro.mp4")
        return self.image

    def on_touch_down(self, touch):
        print("Touch down")
        if self.image.collide_point(*touch.pos):
            x, y = touch.pos
            x = int(x / self.image.width * self.cap.get(cv2.CAP_PROP_FRAME_WIDTH))
            y = int(y / self.image.height * self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
            ret, frame = self.cap.read()
            if ret:
                roi_frame = get_roi(frame, x, y)
                cv2.imshow("ROI Frame", roi_frame)

    def update(self, dt):
        ret, frame = self.cap.read()
        if ret:
            buf = cv2.flip(frame, 0).tobytes()
            image_texture = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='bgr')
            image_texture.blit_buffer(buf, colorfmt='bgr', bufferfmt='ubyte')
            self.image.texture = image_texture

    def on_stop(self):
        self.cap.release()
        cv2.destroyAllWindows()

if __name__ == '__main__':
    app = VideoPlayerApp().run()
I'm still using OpenCV's VideoCapture() method, and I'm updating the Image component using schedule_interval. I started using Kivy recently, so I can't find the error. Can you help me?
Answer 1
Score: 2
The App class does not handle on_touch_down events, so your on_touch_down() method will never be called. The Image class, however, does handle on_touch_down events (all Widgets do). So you can bind to that event using your Image in your build() method, like this:
self.image.bind(on_touch_down=self.on_touch_down)
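For context, here is a minimal sketch of how that bind call could sit inside the build() method from the question (same video path and 30 fps update rate; nothing else there needs to change):

def build(self):
    self.image = Image(allow_stretch=True, keep_ratio=False)
    # The App itself never receives touch events, so bind the handler to the Image widget
    self.image.bind(on_touch_down=self.on_touch_down)
    Clock.schedule_interval(self.update, 1.0/30.0)
    self.cap = cv2.VideoCapture("./partido/06_centro.mp4")
    return self.image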
And modify that on_touch_down() method to accept the correct arguments:
def on_touch_down(self, image, touch):
    print("Touch down")
    if image.collide_point(*touch.pos):
        x, y = touch.pos
        x = int(x / image.width * self.cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        y = int(y / image.height * self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        ret, frame = self.cap.read()
        if ret:
            roi_frame = get_roi(frame, x, y)
            cv2.imshow("ROI Frame", roi_frame)
Note that in this method you can replace self.image with just image, since it is passed to the method as an argument.
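Putting the pieces together, a minimal version of the corrected app might look like the sketch below. It keeps the structure and values from the question (video path, 30 fps update, the same get_roi helper); only the bind call and the handler signature change:

import cv2
import numpy as np
from kivy.app import App
from kivy.uix.image import Image
from kivy.graphics.texture import Texture
from kivy.clock import Clock

def get_roi(frame, x, y):
    # Same masking helper as in the question: keep a 100x100 square around (x, y)
    roi_corners = np.array([[x-50, y-50], [x+50, y-50], [x+50, y+50], [x-50, y+50]])
    mask = np.zeros(frame.shape[:2], np.uint8)
    cv2.drawContours(mask, [roi_corners], 0, (255, 255, 255), -1)
    return cv2.bitwise_and(frame, frame, mask=mask)

class VideoPlayerApp(App):
    def build(self):
        self.image = Image(allow_stretch=True, keep_ratio=False)
        # Touch events are delivered to widgets, not to the App, so bind the handler here
        self.image.bind(on_touch_down=self.on_touch_down)
        self.cap = cv2.VideoCapture("./partido/06_centro.mp4")
        Clock.schedule_interval(self.update, 1.0/30.0)
        return self.image

    def on_touch_down(self, image, touch):
        # 'image' is the widget that was touched; scale widget coordinates to frame coordinates
        if image.collide_point(*touch.pos):
            x = int(touch.pos[0] / image.width * self.cap.get(cv2.CAP_PROP_FRAME_WIDTH))
            y = int(touch.pos[1] / image.height * self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
            ret, frame = self.cap.read()
            if ret:
                cv2.imshow("ROI Frame", get_roi(frame, x, y))

    def update(self, dt):
        ret, frame = self.cap.read()
        if ret:
            # Flip vertically because Kivy textures have their origin at the bottom-left
            buf = cv2.flip(frame, 0).tobytes()
            texture = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='bgr')
            texture.blit_buffer(buf, colorfmt='bgr', bufferfmt='ubyte')
            self.image.texture = texture

    def on_stop(self):
        self.cap.release()
        cv2.destroyAllWindows()

if __name__ == '__main__':
    VideoPlayerApp().run()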