Proper way to use bind-mount in Docker
Question
I have a Django app that I need to dockerize. The built-in database gets updated from time to time, so I need those changes reflected on the host machine. My Dockerfile looks like this:
FROM python:3.11-slim-buster
ENV PYTHONUNBUFFERED=1
RUN pip install --upgrade pip
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
And the docker-compose file is:
version: '3'
services:
  better_half:
    container_name: better-half-django
    build:
      context: .
    command: bash -c "python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    env_file:
      - .env
    ports:
      - "8000:8000"
I have used a bind mount to reflect the changes, and the app runs perfectly in this configuration. But I am not sure whether this is best practice.
I want to know the best practice: should I use the `COPY` instruction in the Dockerfile to copy all the project code into the image's `/app` directory? I am a newbie. Can anyone help me? Thanks in advance.
Answer 1
Score: 1
A bind mount like the one in your Docker Compose file is a popular choice for development environments because it lets you see changes made on your machine inside a running container without rebuilding the Docker image. This simplifies the development process: you can edit code locally and quickly observe the impact of those changes.
For production, it's best to bake a copy of your code into the Docker image. This keeps the code self-contained within the image, giving consistent deployments across environments. It's important not to rely on the host machine's code during deployment, and this approach ensures that.
Your Dockerfile can be modified in the following manner to include your Django app code into the Docker image:
FROM python:3.11-slim-buster
ENV PYTHONUNBUFFERED=1
RUN pip install --upgrade pip
WORKDIR /app
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt
After the Dockerfile copies everything from your current directory into the image's `/app` directory, the subsequent `pip install` command installs the dependencies listed in requirements.txt.
With this technique, updating your code requires rebuilding the Docker image: run `docker-compose build` before starting the containers with `docker-compose up`.
Since bind mounts are no longer used to synchronize code changes, remember to remove the `volumes` section from your Docker Compose file.
Consistency and reproducibility are essential in production environments, making this setup ideal. But for development purposes, a more streamlined and quicker feedback loop can be achieved through the use of bind-mounts.
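If you want both behaviors, Compose can layer the two: keep the image self-contained for production and re-introduce the bind mount only for development via an override file. A minimal sketch (the file name follows Compose's default `docker-compose.override.yml` convention, which `docker-compose up` picks up automatically; the service name is taken from the question):

```yaml
# docker-compose.override.yml -- merged automatically by
# `docker-compose up` in development; deploy with
# `docker-compose -f docker-compose.yml up` to skip it.
services:
  better_half:
    volumes:
      - .:/app
```

This way the production Dockerfile with its `COPY` stays the single source of truth, while local edits still appear instantly during development.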
Answer 2
Score: 1
Your image doesn't seem to contain any of the actual application code, but instead has it injected via a bind mount. I would not consider this a best practice. Normally the image should be completely self-contained. Consider deploying the application to a remote server: you should be able to install Docker and copy the image to the server as-is, without separately also copying the source code. (In practice you do need to copy the `docker-compose.yml` file.)
You mention a database. A very common practice in Docker is to use a relational database in a separate container. There are prebuilt images for, for example, PostgreSQL that you can just use.
So in your Dockerfile, do `COPY` your application code in, and do declare the default `CMD` your application should run.
FROM python:3.11-slim-buster
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
# add
COPY ./ ./
CMD python manage.py runserver 0.0.0.0:8000
# (consider making manage.py executable to avoid saying `python` explicitly)
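A side note on the `CMD` line above (my addition, not part of the original answer): the shell form shown runs the server under `/bin/sh -c`, which means the shell, not Python, receives stop signals. The exec form avoids that:

```dockerfile
# Exec form: Docker runs the process directly, so it receives
# SIGTERM from `docker stop` and can shut down cleanly.
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

Either form works for this question; the exec form just makes container shutdown more predictable.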
In your Django configuration, it helps to make the database settings configurable via environment variables. This matters especially because the database hostname inside a container differs from the one in your host development environment.
# settings.py
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DATABASE_NAME", "mydatabase"),
        "USER": os.environ.get("DATABASE_USER", "mydatabaseuser"),
        "PASSWORD": os.environ.get("DATABASE_PASSWORD", "mypassword"),
        "HOST": os.environ.get("DATABASE_HOST", "127.0.0.1"),
        "PORT": os.environ.get("DATABASE_PORT", "5432")
    }
}
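To illustrate the fallback behavior of these settings, here is a small standalone sketch (the helper `database_settings` is hypothetical, written only for this example): lookups fall back to local-development defaults unless a variable is set, as Compose's `environment:` block does inside the container.

```python
def database_settings(env):
    """Build the Django DATABASES["default"] entry from an environment
    mapping (in settings.py this would be os.environ), falling back to
    local-development defaults when a variable is absent."""
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": env.get("DATABASE_NAME", "mydatabase"),
        "USER": env.get("DATABASE_USER", "mydatabaseuser"),
        "PASSWORD": env.get("DATABASE_PASSWORD", "mypassword"),
        "HOST": env.get("DATABASE_HOST", "127.0.0.1"),
        "PORT": env.get("DATABASE_PORT", "5432"),
    }

# On the host, with nothing set, the local defaults apply:
print(database_settings({})["HOST"])  # 127.0.0.1
# In the container, Compose's `environment:` block overrides them:
print(database_settings({"DATABASE_HOST": "database"})["HOST"])  # database
```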
Now in your `docker-compose.yml` file you need to provide both the application and its database. I've written out the connection information in the Compose file here, but it could go into the `.env` file as well. The database `ports:` aren't required, but you can add them to make the database accessible from the host, possibly for local development (you may need to change the first port number only if you're running another PostgreSQL server on the host).
version: '3.8'
services:
  better_half:
    build: .
    env_file:
      - .env
    ports:
      - "8000:8000"
    environment:
      DATABASE_HOST: database
      DATABASE_USER: postgres
      DATABASE_NAME: postgres
      DATABASE_PASSWORD: passw0rd
  database:
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: passw0rd
    # ports: ['5432:5432']
volumes:
  dbdata:
To answer your original question, note that there are no bind mounts at all in this setup. You may need to back up and restore the database to run the application somewhere else, but you do not need any of the code separate from its Docker image.