Error ‘Pipeline have not been created’ in python/opencv


Question


I'm in the process of calibrating the camera, and for that I'm using Python together with the OpenCV library. I'm using the Waveshare IMX219 camera on a Jetson Nano.

I tried to capture images from the camera in order to calibrate it, using the "VideoCapture" function and passing camera index 0 as a parameter.
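In essence, the capture is opened like this (a minimal sketch; the rest of the calibration script is omitted):

import cv2

cap = cv2.VideoCapture(0)  # open the first camera by index
ret, frame = cap.read()

That's when the following warnings appear: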

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

The camera is correctly connected and is being recognized by the device.

Answer 1

Score: 1


Passing index 0 would use either the V4L backend or the v4l2src plugin of the GStreamer backend (your case is the latter). The problem is that the IMX219 is a Bayer RG10 sensor; its raw video is not suitable for OpenCV, which expects BGR format for most algorithms (other formats are available, though, depending on your OpenCV version).
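As an aside, OpenCV can demosaic Bayer data itself via cvtColor; here is a minimal sketch using a synthetic 8-bit mosaic (a real RG10 frame is 10-bit and would need unpacking/scaling first):

import cv2
import numpy as np

# Synthetic 8-bit Bayer RG mosaic standing in for sensor raw data
bayer = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)

# Demosaic to the 3-channel BGR layout most OpenCV algorithms expect
bgr = cv2.cvtColor(bayer, cv2.COLOR_BayerRG2BGR)
print(bgr.shape)  # (720, 1280, 3)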

With a Jetson, the path would be to use Argus, which debayers, auto-tunes gains, exposure, white balance, etc. with the ISP and provides NV12 frames in NVMM memory, where GStreamer can handle them thanks to the nvarguscamerasrc plugin.
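You can check that the plugin is installed before building the pipeline; gst-inspect-1.0 ships with GStreamer, so a quick check from Python might look like this (just a sketch, not part of the fix itself):

import subprocess

# gst-inspect-1.0 exits with code 0 when the element is known to GStreamer
result = subprocess.run(['gst-inspect-1.0', 'nvarguscamerasrc'],
                        capture_output=True, text=True)
print('nvarguscamerasrc available' if result.returncode == 0 else 'plugin missing')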

For providing BGR frames to an OpenCV application, you may first use the Jetson's VIC hardware to convert into BGRx format with nvvidconv, outputting into system memory, and then use videoconvert for the final BGR conversion. So the pipeline would be:

import cv2

# Argus source -> NV12 frames in NVMM -> BGRx in system memory (VIC) -> BGR for OpenCV
cam_pipeline_str = 'nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM),format=NV12,width=1280,height=720,framerate=30/1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1'
cap = cv2.VideoCapture(cam_pipeline_str, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError('Failed to open camera pipeline')

# if you get here, loop reading frames and do what you want with them...
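A minimal read loop to go with it (the display and quit-key handling are just one possible way to consume the frames):

while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('IMX219', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to stop
        break
cap.release()
cv2.destroyAllWindows()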

Also note that the default device-tree/driver support of some Jetson L4T versions is for the RPi v2 IMX219 cam. I have no experience with Waveshare cams, but vendors should provide an SDK for their products. Check with your camera vendor for your L4T release.
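If you're unsure which L4T release you're running, it is recorded in /etc/nv_tegra_release on Jetson images, as far as I know; a quick way to print it:

# /etc/nv_tegra_release holds the L4T version string on Jetson images
with open('/etc/nv_tegra_release') as f:
    print(f.read().strip())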
