Multiple format changes in FFmpeg for thermal camera
Question
I'm having trouble generating a command to process the raw data from a UVC thermal camera so that it can be colorized and then output to a virtual device, with the intention of streaming it over RTSP. This is on a Raspberry Pi 3B+ with 32-bit Bullseye.
The original command that works perfectly for previewing is:
ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo - | ffplay -pixel_format gray16le -video_size 256x192 -f rawvideo -i - -vf 'normalize=smoothing=10, format=pix_fmts=rgb48, pseudocolor=p=inferno'
Essentially, this takes the raw data, crops out the useful portion, then pipes it to ffplay, where it is interpreted as 16-bit grayscale (gray16le in this case), normalized, converted to 48-bit RGB, and finally run through a pseudocolor filter.
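For readers following along, here is the same pipeline unchanged, just spread over lines with a comment per stage:

# Stage 1 (ffmpeg): capture 256x384 frames from the camera, keep only the bottom half
#   of each frame (the raw 16-bit thermal data), and write raw frames to stdout.
# Stage 2 (ffplay): reinterpret the piped bytes as gray16le, normalize, convert to
#   rgb48, and apply the inferno pseudocolor palette for display.
ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 \
       -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo - |
  ffplay -pixel_format gray16le -video_size 256x192 -f rawvideo -i - \
         -vf 'normalize=smoothing=10, format=pix_fmts=rgb48, pseudocolor=p=inferno'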
I haven't been able to translate this into an ffmpeg-only command because it throws codec or format errors, or converts the 16-bit data to 10-bit even though I need 16-bit. I have tried using v4l2loopback and two instances of ffmpeg in separate windows to see where the error was actually occurring, but I suspect that introduces more format issues that distract from the original problem. The closest I have been able to get is:
ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo /dev/video3
Followed by:
ffmpeg -video_size 256x192 -i /dev/video3 -f rawvideo -pix_fmt gray16le -vf 'normalize=smoothing=10,format=pix_fmts=rgb48, pseudocolor=p=inferno' -f rawvideo -f v4l2 /dev/video4
This results in a non-colorized but somewhat useful image, with certain temperatures showing as missing pixels, as opposed to the ffplay command, which shows a properly colorized stream with no missing pixels.
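For reference, /dev/video3 and /dev/video4 above are v4l2loopback devices; a typical module load creating two loopback devices with those numbers looks something like the following (the card labels and exclusive_caps values are only placeholders):

# Create /dev/video3 and /dev/video4 as dummy devices for ffmpeg to write to and read from.
sudo modprobe v4l2loopback devices=2 video_nr=3,4 \
     card_label=thermal_raw,thermal_color exclusive_caps=1,1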
I'll include my configuration and the log from the preview command; the log doesn't show errors unless I try to modify parameters and presumably break the syntax.
ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo - | ffplay -pixel_format gray16le -video_size 256x192 -f rawvideo -i - -vf 'normalize=smoothing=10, format=pix_fmts=rgb48, pseudocolor=p=inferno'
ffplay version N-109758-gbdc76f467f Copyright (c) 2003-2023 the FFmpeg developers
built with gcc 10 (Raspbian 10.2.1-6+rpi1)
configuration: --prefix=/usr/local --enable-nonfree --enable-gpl --enable-hardcoded-tables --disable-ffprobe --disable-ffplay --enable-libx264 --enable-libx265 --enable-sdl --enable-sdl2 --enable-ffplay
libavutil 57. 44.100 / 57. 44.100
libavcodec 59. 63.100 / 59. 63.100
libavformat 59. 38.100 / 59. 38.100
libavdevice 59. 8.101 / 59. 8.101
libavfilter 8. 56.100 / 8. 56.100
libswscale 6. 8.112 / 6. 8.112
libswresample 4. 9.100 / 4. 9.100
libpostproc 56. 7.100 / 56. 7.100
ffmpeg version N-109758-gbdc76f467f Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 10 (Raspbian 10.2.1-6+rpi1)
configuration: --prefix=/usr/local --enable-nonfree --enable-gpl --enable-hardcoded-tables --disable-ffprobe --disable-ffplay --enable-libx264 --enable-libx265 --enable-sdl --enable-sdl2 --enable-ffplay
libavutil 57. 44.100 / 57. 44.100
libavcodec 59. 63.100 / 59. 63.100
libavformat 59. 38.100 / 59. 38.100
libavdevice 59. 8.101 / 59. 8.101
libavfilter 8. 56.100 / 8. 56.100
libswscale 6. 8.112 / 6. 8.112
libswresample 4. 9.100 / 4. 9.100
libpostproc 56. 7.100 / 56. 7.100
Input #0, video4linux2,v4l2, from '/dev/video0':B sq= 0B f=0/0
Duration: N/A, start: 242.040935, bitrate: 39321 kb/s
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 256x384, 39321 kb/s, 25 fps, 25 tbr, 1000k tbn
Stream mapping:.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0
Stream #0:0 -> #0:0 (rawvideo (native) -> rawvideo (native))
Press [q] to stop, [?] for help
Output #0, rawvideo, to 'pipe:': 0KB vq= 0KB sq= 0B f=0/0
Metadata:
encoder : Lavf59.38.100
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422(tv, progressive), 256x192, q=2-31, 19660 kb/s, 25 fps, 25 tbn
Metadata:
encoder : Lavc59.63.100 rawvideo
frame= 0 fps=0.0 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbInput #0, rawvideo, from 'fd:': 0KB vq= 0KB sq= 0B f=0/0
Duration: N/A, start: 0.000000, bitrate: 19660 kb/s
Stream #0:0: Video: rawvideo (Y1[0][16] / 0x10003159), gray16le, 256x192, 19660 kb/s, 25 tbr, 25 tbn
frame= 13 fps=0.0 q=-0.0 size= 1152kB time=00:00:00.52 bitrate=18148.4kbits
frame= 25 fps= 24 q=-0.0 size= 2304kB time=00:00:01.00 bitrate=18874.4kbits
frame= 39 fps= 25 q=-0.0 size= 3648kB time=00:00:01.56 bitrate=19156.7kbits
frame= 51 fps= 24 q=-0.0 size= 4800kB time=00:00:02.04 bitrate=19275.3kbits
frame= 64 fps= 24 q=-0.0 size= 6048kB time=00:00:02.56 bitrate=19353.6kbits
frame= 78 fps= 25 q=-0.0 size= 7392kB time=00:00:03.12 bitrate=19408.7kbits
I'd also like to use the correct option so the log isn't scrolling through every frame, as well as links to resources on adapting a command into a script for beginners; that's outside the scope of this question, but any direction on those would be much appreciated.
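As a brief aside on both of those points, a minimal sketch of wrapping the preview command in a script with quieter logging might look like this; -hide_banner, -loglevel and -nostats are standard ffmpeg/ffplay options, while the script name and argument handling are purely illustrative:

#!/bin/bash
# thermal-preview.sh -- illustrative wrapper around the preview pipeline.
# Usage: ./thermal-preview.sh [camera-device]   (defaults to /dev/video0)
set -euo pipefail
CAM="${1:-/dev/video0}"

ffmpeg -hide_banner -loglevel warning -nostats \
       -input_format yuyv422 -video_size 256x384 -i "$CAM" \
       -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo - |
  ffplay -hide_banner -loglevel warning \
         -pixel_format gray16le -video_size 256x192 -f rawvideo -i - \
         -vf 'normalize=smoothing=10, format=pix_fmts=rgb48, pseudocolor=p=inferno'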
Answer 1
Score: 0
@gyan's question gave me an idea, and I realized that the original code was given in stages to show the theory behind what was happening and I was overcomplicating it. I was able to get it working with:
ffmpeg -input_format gray16le -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2), normalize=smoothing=10, pseudocolor=p=inferno' -pix_fmt yuv420p -f v4l2 /dev/video2
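For the original goal of streaming the colorized feed over RTSP, one possible next step (a sketch, not something tested in this answer) is to read the loopback device back into ffmpeg and push it to an RTSP server such as MediaMTX; the server address, the /thermal path and the encoder settings below are assumptions:

# Assumes an RTSP server (e.g. MediaMTX) is already listening on rtsp://localhost:8554.
ffmpeg -f v4l2 -i /dev/video2 \
       -c:v libx264 -preset ultrafast -tune zerolatency -pix_fmt yuv420p \
       -f rtsp rtsp://localhost:8554/thermal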