GCP Log Cost Attribution

Question

I have multiple services in a single project; let's say I have 2 GAE (Google App Engine) services. The first is service_a and the second is service_b. Both services write logs. I've already labeled both GAE services, but it seems the labels are not applied to log costs. Is there any way to differentiate the log cost between service_a and service_b?

Answer 1

Score: 1

After spending a few days on this, I've found an alternative solution.

By default, you will have 2 log storage buckets and 2 log router sinks:

  1. _Default
  2. _Required (ignore this one)

From my understanding:

  1. A log storage bucket is the storage that holds your logs.
  2. A log router sink is a rule that decides which bucket a log entry is stored in.

By default, your logs are stored in the _Default bucket.

Here is the default state in Terraform:

resource "google_logging_project_bucket_config" "_Default" {
  project        = data.google_project.project.project_id
  location       = "global"
  retention_days = 30
  bucket_id      = "_Default"
  description    = "Default"
}

resource "google_logging_project_sink" "_default" {
  name        = "_Default"
  disabled    = false
  destination = "logging.googleapis.com/projects/my-project/locations/global/buckets/_Default"

  filter = "NOT LOG_ID(\"cloudaudit.googleapis.com/activity\") AND NOT LOG_ID(\"externalaudit.googleapis.com/activity\") AND NOT LOG_ID(\"cloudaudit.googleapis.com/system_event\") AND NOT LOG_ID(\"externalaudit.googleapis.com/system_event\") AND NOT LOG_ID(\"cloudaudit.googleapis.com/access_transparency\") AND NOT LOG_ID(\"externalaudit.googleapis.com/access_transparency\")"


  unique_writer_identity = true
}
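
Note that the _Default bucket and sink already exist in every project, so Terraform cannot create them; they first have to be adopted into state. A minimal sketch using Terraform 1.5+ import blocks (the import ID formats follow the provider docs, but verify them against your provider version):

import {
  to = google_logging_project_bucket_config._Default
  id = "projects/my-project/locations/global/buckets/_Default"
}

import {
  to = google_logging_project_sink._default
  id = "projects/my-project/sinks/_Default"
}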

So the solution is to create a log storage bucket and a log router sink for each of your services.
You can see the Terraform code below (in this case, I have 1 GAE service and 1 Dataflow job):

resource "google_logging_project_bucket_config" "_Default" {
  project        = data.google_project.project.project_id
  location       = "global"
  retention_days = 30
  bucket_id      = "_Default"
  description    = "Default"
}

resource "google_logging_project_bucket_config" "ingestion_dataflow_logging_bucket" {
  project        = data.google_project.project.project_id
  location       = "global"
  retention_days = 30
  bucket_id      = "ingestion_dataflow"
  description    = "ingestion dataflow logging bucket"
}

resource "google_logging_project_bucket_config" "gae_minerva_logging_bucket" {
  project        = data.google_project.project.project_id
  location       = "global"
  retention_days = 30
  bucket_id      = "gae_minerva"
  description    = "gae minerva logging bucket"
}

resource "google_logging_project_sink" "_default" {
  name        = "_Default"
  disabled    = false
  destination = "logging.googleapis.com/projects/my-project/locations/global/buckets/_Default"

  filter = "NOT LOG_ID(\"cloudaudit.googleapis.com/activity\") AND NOT LOG_ID(\"externalaudit.googleapis.com/activity\") AND NOT LOG_ID(\"cloudaudit.googleapis.com/system_event\") AND NOT LOG_ID(\"externalaudit.googleapis.com/system_event\") AND NOT LOG_ID(\"cloudaudit.googleapis.com/access_transparency\") AND NOT LOG_ID(\"externalaudit.googleapis.com/access_transparency\")"

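  # Exclude the per-service logs from _Default so each entry is stored
  # (and billed) in exactly one bucket.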
  exclusions {
    name     = "dataflow-ingestion-exclusions"
    disabled = false
    filter   = "resource.type=\"dataflow_step\" AND jsonPayload.worker:\"dataflow_ingestion_worker\""
  }
  exclusions {
    name     = "gae-minerva-exclusions"
    disabled = false
    filter   = "resource.type=\"gae_app\" AND resource.labels.module_id:\"minerva_project\""
  }

  unique_writer_identity = true
}

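# Route the ingestion Dataflow worker logs to their own bucket.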
resource "google_logging_project_sink" "ingestion_dataflow_logging_sink" {
  name        = "ingestion_dataflow"
  disabled    = false
  destination = "logging.googleapis.com/projects/my-project/locations/global/buckets/ingestion_dataflow"

  filter = "resource.type=\"dataflow_step\" AND jsonPayload.worker:\"dataflow_ingestion_worker\""

  unique_writer_identity = true
}

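# Route the GAE service's logs (module_id "minerva_project") to their own bucket.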
resource "google_logging_project_sink" "gae_minerva_logging_sink" {
  name        = "gae_minerva"
  disabled    = false
  destination = "logging.googleapis.com/projects/my-project/locations/global/buckets/gae_minerva"

  filter                 = "resource.type=\"gae_app\" AND resource.labels.module_id:\"minerva_project\""
  unique_writer_identity = true
}
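
As an optional aside (not part of the original setup): each sink created with unique_writer_identity = true gets its own generated service account. For log buckets in the same project no extra IAM grant is normally needed, but exporting the identities is handy if you later route these logs to another project:

# Expose each sink's generated writer service account.
output "ingestion_dataflow_sink_writer" {
  value = google_logging_project_sink.ingestion_dataflow_logging_sink.writer_identity
}

output "gae_minerva_sink_writer" {
  value = google_logging_project_sink.gae_minerva_logging_sink.writer_identity
}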

By creating a storage bucket for each service, you can see each bucket's log volume, and from that you can estimate the log cost for each service.

Answer 2

Score: 0

You can try exploring the Price Calculator as a workaround. In the calculator, search for the Cloud Operations product and enter the values it asks for. For this you need to know the volume of log data each service receives; by entering those values, you can work out the log cost of each service.

Below is an example for your reference:

If service A received 100 GiB of logs, enter the value as shown in screenshot 1 below.

[Screenshot 1: Price Calculator input for 100 GiB of logs]

After entering the value and clicking Estimate, you will receive the estimated cost as shown in screenshot 2 below.

[Screenshot 2: estimated cost]
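
As a rough sanity check on the calculator's output (assuming Cloud Logging's published rate of $0.50 per GiB ingested beyond the 50 GiB free monthly allotment per project; verify current pricing before relying on this):

(100 GiB - 50 GiB free) × $0.50/GiB = $25 per month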

Similarly, check for service B. This is just a suggestion for your query.

Also, as per @JohnHanley's comments, if there is no such method, raise a feature request on the Public Issue Tracker with a description of your issue. The Issue Tracker is a forum for end users to report bugs and request improvements to Google Cloud products; Google's product engineering team will work on the implementation.
