Create RTSP stream of a webcam using OpenCV in C++

Question

I am trying to capture video from my webcam and send it via an RTSP stream using OpenCV in C++. I have not worked much with C++ before, so please excuse any mistakes. The code below writes the webcam stream to a file, but I want to stream it to an RTSP server instead.
cv::VideoWriter virtualWebcam;
HRESULT hr = CoInitialize(NULL);
if (SUCCEEDED(hr)) {
    virtualWebcam.open(
        "./file.avi", cv::CAP_ANY,
        cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
        camera_frame_rate,
        cv::Size(static_cast<int>(camera_frame_width), static_cast<int>(camera_frame_height)),
        true);
    if (!virtualWebcam.isOpened()) {
        cerr << "Error opening virtual webcam\n";
        return 1;
    }
} else {
    cerr << "Error initializing COM library for virtual webcam\n";
    return 1;
}
I also tried something like:

virtualWebcam.open("rtsp://localhost:8554/stream", cv::CAP_FFMPEG, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'), camera_frame_rate, cv::Size(static_cast<int>(camera_frame_width), static_cast<int>(camera_frame_height)), true);

and also tried cv::CAP_GSTREAMER, but this is not working either.

I want to send the webcam video stream to an RTSP server, or create an RTSP server right here and send the stream over it. Any help is much appreciated. Thanks.
Answer 1

Score: 3
Creating an RTSP stream with cv::VideoWriter is supported by the cv::CAP_GSTREAMER backend, but not by the cv::CAP_FFMPEG backend.

Using the GStreamer backend is complicated, and requires building OpenCV with GStreamer support. The following post shows an example of creating an RTSP stream using the GStreamer backend. For some reason the created stream can be captured by GStreamer, but not by other applications (I could not find what is missing).
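For reference, with the GStreamer backend the "filename" passed to cv::VideoWriter is a pipeline description rather than a path. A sketch of such a pipeline is shown below; this is an assumption for illustration only (the rtspclientsink element requires the GStreamer RTSP server plugins, and the server address mirrors the one used in this answer):

```
appsrc ! videoconvert ! video/x-raw,format=I420 ! x264enc tune=zerolatency ! rtspclientsink location=rtsp://localhost:8554/stream
```

An RTSP server such as MediaMTX must already be listening at that address, since rtspclientsink publishes to an existing server rather than serving the stream itself.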
Instead of cv::VideoWriter, we may use the FFmpeg CLI: execute FFmpeg as a sub-process, and write the video frames to its stdin pipe, using the same technique described in my following answer.

We may use an FFmpeg command line such as the following:
ffmpeg -re -f rawvideo -r 10 -video_size 640x480 -pixel_format bgr24 -i pipe: -vcodec libx264 -crf 24 -pix_fmt yuv420p -f rtsp rtsp://localhost:8554/stream
-re - used for "live streaming": slows the transmission down to the frame rate (for simulating a "virtual webcam").
-r 10 -video_size 640x480 -pixel_format bgr24 - defines 10 fps, 640x480 resolution, and the BGR pixel format.
-i pipe: - defines that FFmpeg's input is the stdin pipe.
-vcodec libx264 -crf 24 -pix_fmt yuv420p - defines the H.264 codec with nominal quality (and color subsampling to YUV420).
-f rtsp rtsp://localhost:8554/stream - defines the RTSP output format and the output stream's port and name.
For capturing the RTSP stream using FFplay, execute FFplay before executing the C++ code:
ffplay -rtsp_flags listen -i rtsp://localhost:8554/stream
The following C++ code sample sends synthetic OpenCV images to an RTSP video stream:
#include <stdio.h>
#include <string>

#include "opencv2/opencv.hpp"

//For receiving the RTSP video stream with FFplay, execute: "ffplay -rtsp_flags listen -i rtsp://localhost:8554/stream" before executing this program.
int main()
{
    // 10000 frames, resolution 640x480, and 10 fps
    int width = 640;
    int height = 480;
    int n_frames = 10000;
    int fps = 10;

    const std::string output_stream = "rtsp://localhost:8554/stream"; //Send RTSP to port 8554 of "localhost", with a stream named "stream".

    //Open the FFmpeg application as a sub-process.
    //FFmpeg input PIPE: RAW images in BGR color format.
    //FFmpeg output: RTSP stream encoded with the H.264 codec (using the libx264 encoder).
    //Adding '-re' slows the transmission down to the frame rate (for simulating a "virtual webcam").
    std::string ffmpeg_cmd = std::string("ffmpeg -re -f rawvideo -r ") + std::to_string(fps) +
                             " -video_size " + std::to_string(width) + "x" + std::to_string(height) +
                             " -pixel_format bgr24 -i pipe: -vcodec libx264 -crf 24 -pix_fmt yuv420p -f rtsp " + output_stream;

    //Execute FFmpeg as a sub-process, and open the stdin pipe (of the FFmpeg sub-process) for writing.
    //In Windows we need to use _popen, and in Linux popen.
#ifdef _MSC_VER
    FILE* pipeout = _popen(ffmpeg_cmd.c_str(), "wb"); //Windows (ffmpeg.exe must be in the execution path)
#else
    //https://batchloaf.wordpress.com/2017/02/12/a-simple-way-to-read-and-write-audio-and-video-files-in-c-using-ffmpeg-part-2-video/
    FILE* pipeout = popen(ffmpeg_cmd.c_str(), "w"); //Linux (assume ffmpeg exists in /usr/bin/ffmpeg and is in the path).
#endif

    cv::Mat frame = cv::Mat(height, width, CV_8UC3); //Initialize the frame.

    for (int i = 0; i < n_frames; i++)
    {
        //Build a synthetic image for testing ("render" a video frame):
        frame = cv::Scalar(60, 60, 60); //Fill the background with dark gray
        cv::putText(frame, std::to_string(i+1), cv::Point(width/2 - 100*(int)(std::to_string(i+1).length()), height/2+100), cv::FONT_HERSHEY_DUPLEX, 10, cv::Scalar(255, 30, 30), 20); //Draw a blue number

        //cv::imshow("frame", frame); cv::waitKey(1); //Show the frame for testing

        //Write width*height*3 bytes to the stdin pipe of the FFmpeg sub-process (assume the frame data is contiguous in RAM).
        fwrite(frame.data, 1, (size_t)width*height*3, pipeout);
    }

#ifdef _MSC_VER
    _pclose(pipeout); //Windows
#else
    pclose(pipeout); //Linux
#endif

    return 0;
}
Note:

To avoid the need to execute FFplay in advance, we may run MediaMTX as an "rtsp-simple-server" (keep it running in the background). Then it is also possible to receive the stream with, for example, the VLC video player.
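The note above can be sketched as a concrete three-terminal workflow. This is an illustration only: the mediamtx binary name and port 8554 follow MediaMTX's defaults, and ./rtsp_sender is a hypothetical name for the compiled sample above.

```shell
# Terminal 1: start the RTSP server (MediaMTX listens on port 8554 by default)
./mediamtx

# Terminal 2: run the sender; its FFmpeg sub-process publishes to the server
./rtsp_sender

# Terminal 3: watch the stream (or open the same URL in VLC via Media -> Open Network Stream)
ffplay -i rtsp://localhost:8554/stream
```

With a server in the middle, multiple clients can connect and they can join or leave while the sender keeps running.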