Create log group for custom resource lambda in @aws-cdk/aws-s3-deployment

Question

I am using the @aws-cdk/aws-s3-deployment module to upload files to S3 during deployment.

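For context, a minimal sketch of the setup (bucket name and asset path are placeholders, not my real values):

from aws_cdk import Stack, aws_s3 as s3, aws_s3_deployment as s3deploy
from constructs import Construct

class WebsiteStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        bucket = s3.Bucket(self, "WebsiteBucket")  # placeholder bucket

        # Copies the contents of a local directory into the bucket at deploy time.
        s3deploy.BucketDeployment(
            self,
            "DeployFiles",
            sources=[s3deploy.Source.asset("./website-dist")],  # placeholder path
            destination_bucket=bucket,
        )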

The module creates a CustomResource with a Lambda Function to move the files from the assets bucket to my bucket. When the Lambda function runs, it auto-creates a log group, which:


  • Has no retention set

  • Is not deleted with the stack

  • Has no tags (which I need for compliance)

My normal solution is to just create the Log Group myself! But for that I need to name the lambda function (so that I can create the log group with the same name).

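For a Lambda function I define myself, that looks roughly like this (inside a stack; names, runtime and handler code are placeholders):

from aws_cdk import RemovalPolicy, aws_lambda as _lambda, aws_logs as logs

function_name = "my-named-function"  # placeholder name

fn = _lambda.Function(
    self,
    "MyFunction",
    function_name=function_name,
    runtime=_lambda.Runtime.PYTHON_3_9,
    handler="index.handler",
    code=_lambda.Code.from_inline("def handler(event, context):\n    return {}"),
)

# Pre-create the log group the function would otherwise auto-create,
# with retention, removal policy and tags under my control.
log_group = logs.LogGroup(
    self,
    "MyFunctionLogGroup",
    log_group_name=f"/aws/lambda/{function_name}",
    retention=logs.RetentionDays.TWO_WEEKS,
    removal_policy=RemovalPolicy.DESTROY,
)

# Ensure the log group exists before the function can run and log.
fn.node.add_dependency(log_group)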

@aws-cdk/aws-s3-deployment uses SingletonFunction, which can be passed a function name.
But @aws-cdk/aws-s3-deployment does not pass a function name.

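For reference, SingletonFunction itself accepts function_name just like Function does (values below are placeholders):

from aws_cdk import aws_lambda as _lambda

handler = _lambda.SingletonFunction(
    self,
    "Handler",
    uuid="00000000-0000-0000-0000-000000000000",  # placeholder uuid identifying the singleton
    function_name="my-singleton-handler",         # the prop BucketDeployment never sets
    runtime=_lambda.Runtime.PYTHON_3_9,
    handler="index.handler",
    code=_lambda.Code.from_inline("def handler(event, context):\n    return {}"),
)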

Is there a way to set the name of the lambda function when using @aws-cdk/aws-s3-deployment?


Answer 1

Score: 0


Usually, when something is not exposed directly in CDK, you can still override it using escape hatches. In this case, you can access the underlying SingletonFunction by going through the .node.children array of the BucketDeployment construct and finding the child that is a SingletonFunction. You can then override the function name or (perhaps better) just reference it when creating the custom log group.
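A minimal sketch of that approach (bucket and asset path are placeholders; it assumes the handler child is a lambda.SingletonFunction and references its generated name when creating the log group):

from aws_cdk import RemovalPolicy, aws_lambda as _lambda, aws_logs as logs, aws_s3 as s3, aws_s3_deployment as s3deploy

bucket = s3.Bucket(self, "AssetsBucket")  # placeholder bucket

deployment = s3deploy.BucketDeployment(
    self,
    "Assets",
    sources=[s3deploy.Source.asset("./assets")],  # placeholder path
    destination_bucket=bucket,
)

# Escape hatch: the handler SingletonFunction is a child of the BucketDeployment construct.
handler = next(
    child for child in deployment.node.children
    if isinstance(child, _lambda.SingletonFunction)
)

# Reference the handler's (generated) function name when creating the custom log group.
logs.LogGroup(
    self,
    "DeploymentHandlerLogs",
    log_group_name=f"/aws/lambda/{handler.function_name}",
    retention=logs.RetentionDays.TWO_WEEKS,
    removal_policy=RemovalPolicy.DESTROY,
)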

Answer 2

Score: 0


The S3 deployment creates CustomResources that behave as you described on stack creation and deletion.
To set the retention as well as the tags of the log group, you have to create the resource yourself.
The crux is to link it to the underlying Lambda function of the custom resource and add the log group as a dependency of that Lambda.
However, you cannot alter the SingletonFunction directly. I found a way to get the actual underlying Lambda function, link it to a log group by its name, and set the dependencies. This ensures that the whole stack, including the log group, is deleted properly. I hope this fixes your problem!

from aws_cdk import (
    aws_ec2 as ec2,
    aws_s3 as s3,
    aws_s3_deployment as s3deploy,
    aws_logs as logs,
    Aws, RemovalPolicy, Stack,
)
from constructs import Construct

class S3AssetDeployment(Construct):
    def __init__(
        self,
        scope: Construct,
        construct_id: str,
        vpc: ec2.Vpc,
        subnets: ec2.SubnetSelection,
        sync_directory: str,  # local directory containing the assets to upload
        sync_subdir: str,     # key prefix in the destination bucket
    ):
        super().__init__(scope, construct_id)

        bucket = s3.Bucket(
            self,
            "AssetsBucket",
            bucket_name=f"fancy-project-assets-{Aws.ACCOUNT_ID}",
        )

        s3_deployment = s3deploy.BucketDeployment(
            self,
            "Assets",
            sources=[s3deploy.Source.asset(sync_directory)],
            destination_bucket=bucket,
            destination_key_prefix=sync_subdir,
            vpc=vpc,
            vpc_subnets=subnets,
        )

        # Find the underlying lambda function of the custom resource and overwrite its name.
        all_stack_objects = Stack.of(s3_deployment).node.find_all()
        id_to_find = "Custom::CDKBucketDeployment"
        lambda_id = next(
            (item for item in [obj.node.id for obj in all_stack_objects] if id_to_find in item),
            None,
        )

        custom_resource_lambda = Stack.of(s3_deployment).node.find_child(lambda_id)
        custom_resource_lambda.node.default_child.add_property_override(
            "FunctionName", "fancy-project-bucket-deployment"
        )

        # Create a new log group with the same name and add the dependencies.
        deployment_log_group = logs.LogGroup(
            self,
            "BucketDeploymentLogGroup",
            log_group_name="/aws/lambda/fancy-project-bucket-deployment",
            removal_policy=RemovalPolicy.DESTROY,
            retention=logs.RetentionDays.TWO_WEEKS,
        )

        # Make sure the log group exists before the lambda runs and is removed with the stack.
        custom_resource_lambda.node.add_dependency(deployment_log_group)
        bucket.node.add_dependency(deployment_log_group)
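For completeness, hypothetical usage from a stack (the VPC, subnet selection and paths are placeholders):

# Inside a Stack, assuming `vpc` is an existing ec2.Vpc:
S3AssetDeployment(
    self,
    "AssetDeployment",
    vpc=vpc,
    subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS),
    sync_directory="./assets",
    sync_subdir="assets",
)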
