Interact with a Docker container in the middle of a bash script execution [in that container]

Question
I want to start a bunch of Docker containers with the help of a Python script, using the `subprocess` library. Essentially, I am trying to run this docker command

```python
docker = f"docker run -it --rm {env_vars} {hashes} {results} {script} {pipeline} --name {project} {CONTAINER_NAME}"
```

in a new terminal window.

```python
Popen(f'xterm -T {project} -geometry 150x30+100+350 -e {docker}', shell=True)
# or
Popen(f'xfce4-terminal -T {project} --minimize {hold} -e="{docker}"', shell=True)
```

The container's CMD looks like this; `/run_pipeline.sh` is a bash script that runs other scripts and the functions in them.

```dockerfile
CMD ["bash", "/run_pipeline.sh"]
```

What I am trying to do is run an interactive shell (bash) from one of these nested scripts at a specific place in case of a failure (i.e. when some condition is met), so that I can investigate the problem in the script, do something to fix it, and continue execution (or just exit if I cannot fix it).

```bash
if [ $? -ne 0 ]; then
  echo Investigate manually: "$REPO_NAME"
  bash
  if [ $? -ne 0 ]; then exit 33; fi
fi
```

I want to do this fully automatically so I don't have to manually keep track of what is going on in each script and run `docker attach ...` when needed, because I will run several such containers simultaneously.

The problem is that this "rescue" bash process exits immediately, and I don't know why. I suspect it has something to do with ttys, but I've tried a bunch of fiddling with it and had no success.

I tried different combinations of `-i`, `-t` and `-d` on the docker command, tried `docker attach ...` right after starting the container with `-d`, and also tried starting the Python script directly from bash in a terminal (I use PyCharm by default). Besides that, I tried the `socat`, `screen`, `script` and `getty` commands (in the nested bash script), but I don't know how to use them properly, so that didn't end well either. At this point I'm too confused to understand why it isn't working.
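(Editor's note, not from the original post.) A likely culprit here: when a container's stdin is not a usable terminal, the nested interactive `bash` hits end-of-file on stdin the moment it starts reading and exits at once. This can be reproduced outside Docker:

```shell
# An interactive bash whose stdin is already at EOF exits immediately --
# the same thing happens to the "rescue" bash inside a container that
# was started without a usable stdin attached.
bash </dev/null
status=$?
echo "rescue shell exited immediately with status $status"
```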

EDIT:

Adding a minimal example (it does NOT reproduce the failure) of how I am starting a container.

```dockerfile
# ./Dockerfile
FROM debian:bookworm-slim
SHELL ["bash", "-c"]
CMD ["bash", "/run_pipeline.sh"]

# run 'docker build -t test .'
```

```python
# ./small_example.py
from subprocess import Popen

if __name__ == '__main__':
    env_vars = "-e REPO_NAME=test -e PROJECT=test_test"
    script = '-v "$(pwd)"/run_pipeline.sh:/run_pipeline.sh:ro'
    docker = f"docker run -it --rm {env_vars} {script} --name test_name test"

    # Popen(f'xterm -T test -geometry 150x30+100+350 +hold -e "{docker}"', shell=True).wait()
    Popen(f'xfce4-terminal -T test --hold -e="{docker}"', shell=True).wait()
```

```bash
# ./run_pipeline.sh

# do some hard work

ls non/existent/path

if [ $? -ne 0 ]; then
  echo Investigate manually: "$REPO_NAME"
  bash
  if [ $? -ne 0 ]; then exit 33; fi
fi
```

It seems like the problem may be in the run_pipeline.sh script, but I don't want to upload it here; it's a bigger mess than what I described earlier. In any case, what I am actually trying to run is this project: https://github.com/IBM/D2A.

So I just wanted some advice on the tty stuff that I am probably missing.

Answer 1

Score: 0
Run the initial container detached, with input and a tty.

```bash
docker run -dit --rm {env_vars} {script} --name test_name test
```

Monitor the container logs for the output, then attach to it.

Here is a quick script example (without a tty in this case, only because the demo uses `echo` for input):

```bash
#!/bin/bash

docker run --name test_name -id debian \
  bash -c 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'

while ! docker logs test_name | grep reading; do
  sleep 3
done

echo "attach input" | docker attach test_name
```

The complete output after it finishes:

```
$ docker logs test_name
start
reading
var=attach input
```

The whole process would be easier to control via the Docker Python SDK, rather than having a layer of shell between Python and Docker.
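(Editor's note.) The wait-for-a-marker-then-respond pattern from the script above can be sketched in Python against a plain subprocess instead of a real container, so it runs anywhere; with the SDK you would poll `container.logs()` and attach instead:

```python
import subprocess

# Child process standing in for the container: it prints a marker,
# then blocks on `read` until input arrives (like the rescue bash).
child = subprocess.Popen(
    ["bash", "-c", 'echo start; echo reading; read var; echo "var=$var"'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# Poll the child's output until the marker appears
# (the docker equivalent: `docker logs` in a loop, or container.logs()).
while True:
    line = child.stdout.readline()
    if "reading" in line:
        break

# Now "attach" and send input (the docker equivalent: `docker attach`).
child.stdin.write("attach input\n")
child.stdin.flush()
reply = child.stdout.readline().strip()
child.wait()
print(reply)  # var=attach input
```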


Answer 2

Score: 0


As I said in a comment to Matt's answer, his solution does not work in my situation either. I think it's a problem with the script that I'm running; it may be that some of the many shell processes (https://i.stack.imgur.com/av4xF.jpg) are taking up the allocated tty, but I don't know for sure.

So I came up with my own workaround. I simply block execution of the script by creating a named pipe and then reading from it.

```bash
if [ $? -ne 0 ]; then
  echo Investigate _make_ manually: "$REPO_NAME"
  mkfifo "/tmp/mypipe_$githash" && echo "/tmp/mypipe_$githash" && read -r res < "/tmp/mypipe_$githash"
  if [ "$res" -ne 0 ]; then exit 33; fi
fi
```
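(Editor's note.) The blocking behavior can be seen in isolation; in this demo a background subshell plays the role of the investigator's terminal:

```shell
pipe="/tmp/mypipe_demo_$$"
mkfifo "$pipe"

# Stand-in for the investigator: after a moment, report status 0 ("fixed").
( sleep 0.2; echo 0 > "$pipe" ) &

# The pipeline blocks on this read until something is written to the pipe.
read -r res < "$pipe"
echo "investigation finished with status $res"
rm -f "$pipe"
```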

Then I just launch a terminal emulator and run `docker exec` in it to start a new bash process. I do this with the help of the Docker Python SDK, by monitoring the container's output so I know when to launch the terminal.

```python
import docker
from subprocess import Popen

def monitor_container_output(container):
    line = b''
    for log in container.logs(stream=True):
        if log == b'\n':
            print(line.decode())
            if b'mypipe_' in line:
                Popen(f'xfce4-terminal -T {container.name} -e="docker exec -it {container.name} bash"', shell=True).wait()
            line = b''
            continue
        line += log


client = docker.from_env()
container = client.containers.run(IMAGE_NAME, name=project, detach=True, stdin_open=True, tty=True,
                                  auto_remove=True, environment=env_vars, volumes=volumes)
monitor_container_output(container)
```

After I finish my investigation of the problem in that new bash process, I send the "status code of the investigation" to tell the script to continue running or exit.

```bash
echo 0 > "/tmp/mypipe_$githash"
```

huangapple, published 2023-02-06 06:16:00
Please keep this link when reposting: https://go.coder-hub.com/75355890.html