Best way to provide env variables (secrets) to docker compose that spins up containers on EC2 auto-scaling group
Question
I need to launch a set of Docker containers on an EC2 instance using docker compose. The EC2 instances are part of an auto-scaling group, and the compose command is executed as part of the `user-data` script that runs on instance start-up.

As part of the `user-data` script, I need to do two things:

- Provide the `docker-compose.yaml` file to the instance. I plan to place the file into an S3 bucket and download it. (If there is a better way, let me know.)
- Provide the env variables required by the `docker-compose.yaml` to the instance. How do I securely do this? The env variables include secrets like database credentials.

I read the docker compose env documentation, so the possible solutions seem to be:

1. Hard-code env variables in the `docker-compose.yaml` file using the `environment` attribute. Is that safe?
2. Place the variables into their own `env_file`, and put them on S3 too. That's basically the same as #1. (But allows us to maintain separate env configurations.)
3. Provide the variables as part of the shell environment.
I am hoping there is a way to do #3 using some sort of managed secret service on AWS.

My requirements are:

- I do not want to hard-code the secrets in any file, ever. I don't want a file with company secrets sitting on S3.
- Ideally, managing secrets is done as part of an automated terraform pipeline, so I don't have to manually copy/paste them around.
  - Example: Every time I spin up the infrastructure, I want the secrets to be automatically generated by terraform and saved somewhere secure, where EC2 can access them. Every time I tear down, I want the secrets to be wiped.

Am I overthinking this? How do people normally do this? I assume this is a common problem, but I haven't been able to find a clear example that fits my requirements.
Answer 1
Score: 2
I suggest looking into ECS instead of inventing your own docker container orchestration system for EC2. However, to answer the basic question:

> How do people normally do this? I assume this is a common problem, but I haven't been able to find a clear example that fits my requirements.

The normal way to provide secrets to something you are running via `user-data` is to have the `user-data` script call out to AWS Parameter Store or AWS Secrets Manager to load the secrets. The `user-data` script would use the AWS CLI tool to make those calls. The IAM instance profile assigned to the EC2 instance would need to grant the appropriate permissions to access the secret(s) in Parameter Store or Secrets Manager, as well as the appropriate permissions to decrypt the secrets if you are using a KMS CMK to encrypt them.
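As a sketch of that approach, a `user-data` script along these lines could download the compose file and export the secrets before starting compose. The region, bucket, parameter names, and paths here are hypothetical; adjust them to your own setup:

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical names: change region, bucket, and parameter paths to match your setup.
REGION="us-east-1"

# Download the compose file from S3 (as described in the question).
aws s3 cp s3://my-app-config/docker-compose.yaml /opt/app/docker-compose.yaml

# Fetch a secret from SSM Parameter Store; --with-decryption handles
# SecureString parameters encrypted with a KMS key.
export DB_PASSWORD="$(aws ssm get-parameter \
  --region "$REGION" \
  --name /my-app/db-password \
  --with-decryption \
  --query Parameter.Value \
  --output text)"

# Alternatively, fetch the same value from Secrets Manager:
# export DB_PASSWORD="$(aws secretsmanager get-secret-value \
#   --region "$REGION" --secret-id my-app/db-password \
#   --query SecretString --output text)"

# docker compose inherits this shell's environment, so the value can be
# referenced as ${DB_PASSWORD} inside docker-compose.yaml without ever
# writing it to a file.
cd /opt/app && docker compose up -d
```

Pair this with an IAM instance profile that allows `ssm:GetParameter` (or `secretsmanager:GetSecretValue`) and `kms:Decrypt` on the relevant resources; terraform can create the parameters and the instance profile together, which covers the "generated and wiped with the infrastructure" requirement.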