Python Flask: Saving live stream video to a file periodically

Question
I am creating a Flask application with JavaScript to save a live video stream to a file.

What I am trying to achieve: the video stream is sent to the Flask application periodically (every 20 seconds). The first time, a video file is created; after that, each new chunk needs to be merged into the existing file.

I am using SocketIO to transmit the video from JS.

async function startCapture() {
  try {
    // Access the user's webcam
    stream = await navigator.mediaDevices.getUserMedia({
      video: true,
      audio: { echoCancellation: true, noiseSuppression: true },
    });

    // Attach the stream to the video element
    video.srcObject = stream;

    // Create a new MediaRecorder instance to capture video chunks
    recorder = new MediaRecorder(stream);

    // Event handler for each data chunk received from the recorder
    recorder.ondataavailable = (e) => {
      chunks.push(e.data);
    };

    // Start recording the video stream
    recorder.start();

    // Enable/disable buttons
    startButton.disabled = true;
    stopButton.disabled = false;

    // Start transmitting video chunks at the desired interval
    startTransmitting();
  } catch (error) {
    console.error('Error accessing webcam:', error);
  }
}
function transmitVideoBlob() {
    const videoBlob = new Blob(chunks, { type: 'video/webm' });
    socket.emit('video_data', videoBlob);
    // Clear the chunks array
    chunks = [];
}

// Start transmitting the accumulated video chunks at the desired interval
function startTransmitting() {
    const videoInterval = 20000; // Interval between transmissions in milliseconds
    videoIntervalId = setInterval(() => {
        transmitVideoBlob();
    }, videoInterval);
}

In Flask, I have created a function which calls create_videos with:

  • video_path: location to save the video
  • filename: file name of the video
  • new_video_data_blob: binary data received from JS

import os
import subprocess
import time
from uuid import uuid1

def create_videos(video_path, filename, new_video_data_blob):
    chunk_filename = os.path.join(video_path, f"{str(uuid1())}_(unknown)")
    final_filename = os.path.join(video_path, filename)
    out_final_filename = os.path.join(video_path, "out_" + filename)
    # Save the current video chunk to a file
    with open(chunk_filename, "wb") as f:
        print("create file chunk ", chunk_filename)
        f.write(new_video_data_blob)

    if not os.path.exists(final_filename):
        # If the final video file doesn't exist, rename the current chunk file
        os.rename(chunk_filename, final_filename)
    else:
        while not os.path.exists(chunk_filename):
            time.sleep(0.1)
        # If the final video file exists, use FFmpeg to concatenate the current chunk with the existing file
        try:
            subprocess.run(
                [
                    "ffmpeg",
                    "-i",
                    f"concat:{final_filename}|{chunk_filename}",
                    "-c",
                    "copy",
                    "-y",
                    out_final_filename,
                ]
            )
            while not os.path.exists(out_final_filename):
                time.sleep(0.1)
            os.remove(final_filename)
            os.rename(out_final_filename, final_filename)

        except Exception as e:
            print(e)
        # Remove the current chunk file
        os.remove(chunk_filename)
    return final_filename
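For reference, the command above uses FFmpeg's concat protocol (`concat:a|b`), which performs raw byte-level concatenation and is generally only valid for formats that can simply be appended at the file level (such as MPEG-TS), not WebM/Matroska. The concat demuxer, which reads a list file, is the usual alternative. A minimal sketch of building that invocation (the helper names are hypothetical, not from the original code):

```python
import os
import tempfile

def write_concat_list(list_file, input_files):
    # The concat demuxer reads one "file '<path>'" line per input
    with open(list_file, "w") as f:
        for path in input_files:
            f.write(f"file '{path}'\n")

def build_concat_demuxer_command(list_file, output_file):
    # Build the ffmpeg argv for the concat demuxer (not the concat protocol)
    return [
        "ffmpeg",
        "-f", "concat",
        "-safe", "0",    # allow arbitrary/absolute paths in the list file
        "-i", list_file,
        "-c", "copy",    # stream copy, no re-encoding
        "-y", output_file,
    ]

# Example: prepare a list file for two chunks (paths are placeholders)
tmp = tempfile.mkdtemp()
list_path = os.path.join(tmp, "input.txt")
write_concat_list(list_path, ["final.webm", "chunk.webm"])
cmd = build_concat_demuxer_command(list_path, "out_final.webm")
```

The command would still be executed with subprocess.run(cmd); note that stream copy only concatenates cleanly when all inputs share the same codecs and parameters.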

When I also record audio, using the code below in JS:

audio: { echoCancellation: true, noiseSuppression: true },

I get the following error:

[NULL @ 0x55e697e8c900] Invalid profile 5.
[webm @ 0x55e697ec3180] Non-monotonous DTS in output stream 0:0; previous: 37075, current: 37020; changing to 37075. This may result in incorrect timestamps in the output file.
[NULL @ 0x55e697e8d8c0] Error parsing Opus packet header.
    Last message repeated 1 times
[NULL @ 0x55e697e8c900] Invalid profile 5.
[NULL @ 0x55e697e8d8c0] Error parsing Opus packet header.


But when I record video only, it works fine.

How can I merge the new binary data to the existing video file?

Answer 1

Score: 0

It seems the issue might be related to how FFmpeg handles audio encoding, particularly with the Opus audio codec. Audio frames may be missing or lost, which can cause the error.

To handle this issue, it might be better to process the video and audio streams separately and then combine them at the end. That way the merge is based on timestamps and does not run into the missing audio/video problem.
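The suggestion above can be sketched as separate FFmpeg invocations: extract each stream with stream copy, then mux them back together. This is only an illustration of the idea (file names and helper names are hypothetical), not tested against the asker's setup:

```python
def split_streams_commands(src, video_out, audio_out):
    # -an drops audio, -vn drops video; stream copy avoids re-encoding
    extract_video = ["ffmpeg", "-i", src, "-an", "-c:v", "copy", "-y", video_out]
    extract_audio = ["ffmpeg", "-i", src, "-vn", "-c:a", "copy", "-y", audio_out]
    return extract_video, extract_audio

def mux_streams_command(video_in, audio_in, output_file):
    # Combine one video stream and one audio stream into a single container
    return ["ffmpeg", "-i", video_in, "-i", audio_in, "-c", "copy", "-y", output_file]

v_cmd, a_cmd = split_streams_commands("session.webm", "video_only.webm", "audio_only.opus")
mux_cmd = mux_streams_command("video_only.webm", "audio_only.opus", "combined.webm")
```

Each command list would be run with subprocess.run; merging per-stream concatenations first, then muxing once, avoids interleaving partial audio packets with video.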

Answer 2

Score: 0

It seems that I don't need to merge the videos at every interval; I can merge them whenever the socket disconnects. Following are the steps I took to solve the issue.

  • Receiving Data from sockets

When receiving data from the socket, I call the "create_videos" function.
For a single session, i.e. until the socket is disconnected, the binary video data can simply be appended to the binary video file, and the video file will play fine.

Note: When trying to append binary data after the socket is reconnected, the new chunk of data is not reflected in the video, although the video size increases. (Not sure why.)

import os

def create_videos(video_path, filename, new_video_data_blob):
    final_filename = os.path.join(video_path, filename)
    # Append the current video chunk to the session file
    with open(final_filename, "ab") as f:
        print("create file chunk ", final_filename)
        f.write(new_video_data_blob)
        f.flush()  # Ensure data is flushed to disk
    # The with block closes the file properly on exit
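The append-per-session behavior can be exercised with placeholder bytes (real chunks come from MediaRecorder; the helper below just mirrors the append logic of create_videos above):

```python
import os
import tempfile

def append_chunk(video_path, filename, blob):
    # Mirror of create_videos above: append the chunk to the session file
    final_filename = os.path.join(video_path, filename)
    with open(final_filename, "ab") as f:
        f.write(blob)
    return final_filename

# Toy usage with placeholder bytes, written to a temporary directory
tmp_dir = tempfile.mkdtemp()
path = append_chunk(tmp_dir, "session_1.webm", b"chunk-1")
append_chunk(tmp_dir, "session_1.webm", b"chunk-2")
# The file now contains both chunks back to back
```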
  • Socket Disconnection

Once a session (the period between socket connection and disconnection) is over and the socket disconnects, the following happens.

If a file named "session_1.webm" exists, the program compresses it (needed in my case) and calls the "merge_videos" function.

In merge_videos, if a file with the required name (1.webm) does not already exist, the session file is simply renamed.

But if it does exist, the function reads all the files in the directory (separated by ids), which will basically always be 2 files, and sorts them so that the latest file comes last. It then creates a "txt" file holding the paths of the files to concatenate.

Then FFmpeg is used to merge the actual files, instead of directly appending chunks to the file.

import os
import subprocess
import time

@socketio.on("disconnect")
def disconnect():
    file_path = os.path.join("media", "1", "1", "recordings")
    session_file_name = "session_1.webm"
    new_file_name = f"{str(int(time.time()))}.webm"
    if os.path.exists(os.path.join(file_path, session_file_name)):
        os.rename(
            os.path.join(file_path, session_file_name),
            os.path.join(file_path, new_file_name),
        )
        # Compress the video
        compress_video(
            os.path.join(file_path, new_file_name),
            os.path.join(file_path, "comp_" + new_file_name),
        )
        # Merge the compressed video with the existing video
        merge_videos(file_path, "1.webm", "comp_" + new_file_name)
    print("Socket Connection Ended")
def merge_videos(video_path, existing_file_name, session_file_name):
    session_file = os.path.join(video_path, session_file_name)
    final_filename = os.path.join(video_path, existing_file_name)
    out_final_filename = os.path.join(video_path, "out_" + session_file_name)

    if not os.path.exists(final_filename):
        os.rename(session_file, final_filename)
        return

    for subdirs, dirs, files in os.walk(video_path):
        videos_lis = []
        for file in files:
            if file.split(".")[1] != "webm":
                continue
            videos_lis.append(file)
        videos_lis.sort()

        with open(f"{video_path}/input.txt", "w") as f:
            for videos in videos_lis:
                f.write(f"file '{videos}'\n")
    try:
        subprocess.run(
            [
                "ffmpeg",
                "-f",
                "concat",
                "-i",
                f"{video_path}/input.txt",
                "-y",
                out_final_filename,
            ]
        )
        os.remove(final_filename)
        os.remove(os.path.join(video_path, session_file_name))
        os.rename(out_final_filename, final_filename)
    except Exception as e:
        print(e)
    os.remove(os.path.join(video_path, "input.txt"))
    print("Success")
def compress_video(video_path, new_path):
    command = [
        "ffmpeg",
        "-i",
        video_path,
        "-y",
        "-crf",
        "40",
        new_path,
    ]
    result = subprocess.run(command)

    if result.returncode == 0:
        os.remove(video_path)
        print("Command executed successfully.")
    else:
        print("An error occurred while executing the command.")

Note: If I do not create a txt file and directly concat the 2 files using FFmpeg, I get the following error: "Non-monotonous DTS in output stream 0:0. This may result in incorrect timestamps in the output file."
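If the DTS warning still appears even with the list file, one possible (untested here) mitigation is asking FFmpeg to regenerate presentation timestamps with `-fflags +genpts`. A sketch of that variant (the helper name is hypothetical):

```python
def concat_with_genpts_command(list_file, output_file):
    # +genpts regenerates missing presentation timestamps while demuxing;
    # it may help with non-monotonic DTS, but is not a guaranteed fix
    return [
        "ffmpeg",
        "-fflags", "+genpts",
        "-f", "concat",
        "-safe", "0",
        "-i", list_file,
        "-c", "copy",
        "-y", output_file,
    ]

cmd = concat_with_genpts_command("input.txt", "merged.webm")
```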

So, this is my current solution to the problem.

huangapple
  • Published on 2023-07-17 16:04:35
  • Please keep this link when reposting: https://go.coder-hub.com/76702535.html