When using Terraform to increase the number of max_pods in the default_node_pool of AKS, the cluster itself must be recreated


Question

  1. Using Terraform, we changed max_pods on the default_node_pool of our AKS cluster from 20 to 30.
  2. Both network_policy and network_plugin are set to "azure".

The code is as follows.

  • var.tf

    variable "system_rg" {
      type    = string
      default = "aks-test-resourcegroup"
    }
    variable "location" {
      type    = string
      default = "Korea Central"
    }
    ###################
    # k8s cluster
    ###################
    variable "cluster_name" {
      default = "Test-AKS"
    }
    variable "aks_version" {
      type    = string
      default = "1.25.5"
    }
    variable "private_cluster_enabled" {
      type    = string
      default = "true"
    }
    variable "private_cluster_public_fqdn_enabled" {
      type    = string
      default = "true"
    }
    variable "private_dns_zone_id" {
      type    = string
      default = "None"
    }
    variable "sku_tier" {
      type    = string
      default = "Free"
    }
    ###################
    # default_node_pool
    ###################
    variable "only_critical_addons_enabled" {
      type    = string
      default = "true"
    }
    variable "temporary_name_for_rotation" {
      type    = string
      default = "tempsys01"
    }
    variable "orchestrator_version" {
      type    = string
      default = "1.25.5"
    }
    variable "agents_count" {
      type    = number
      default = "3"
    }
    variable "agents_size" {
      type    = string
      default = "Standard_D4s_v5"
    }
    variable "os_disk_size_gb" {
      description = "The size of the OS Disk which should be used for each agent in the Node Pool. Changing this forces a new resource to be created."
      type        = number
      default     = 256
    }
    variable "max_pods" {
      description = "The maximum number of pods that can run on each agent. Changing this forces a new resource to be created."
      type        = number
      default     = "30" # 20 => 30
    }
    ###################
    # linux_profile
    ###################
    variable "admin_username" {
      type    = string
      default = "azureuser"
    }
    variable "ssh_public_key" {
      type    = string
      default = ""
    }
    ###################
    # network_profile
    ###################
    variable "service_cidr" {
      type    = string
      default = "10.254.0.0/24"
    }
    variable "dns_service_ip" {
      type    = string
      default = "10.254.0.10"
    }
    variable "docker_bridge_cidr" {
      type    = string
      default = "172.17.0.1/16"
    }
    ###############################
    # user_node_pool
    ###############################
    variable "usernodepoo_vm" {
      description = "VM of AKS Cluster"
      type        = map(any)
      default = {
        vm1 = {
          user_agents_name         = "upool01"
          user_agents_size         = "Standard_D4s_v5"
          user_agents_count        = "4"
          user_agents_os_disk_size = "256"
          max_pods                 = "20"
          orchestrator_version     = "1.25.5"
        }
      }
    }
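  (Aside: several variables above declare type = string for boolean values, and quote numeric defaults. Terraform coerces these, so the configuration works as written, but explicit bool/number types catch bad inputs at plan time. A minimal sketch of the tighter form for two of them:)

    variable "private_cluster_enabled" {
      type    = bool # was: type = string, default = "true"
      default = true
    }
    variable "agents_count" {
      type    = number # unquoted numeric default
      default = 3
    }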
  • cluster.tf

    ############################################################
    # AKS Cluster
    ############################################################
    resource "azurerm_kubernetes_cluster" "aks" {
      name                                = var.cluster_name
      location                            = var.location
      resource_group_name                 = data.azurerm_resource_group.aks-rg.name
      node_resource_group                 = "${var.system_rg}-node"
      dns_prefix                          = var.cluster_name
      kubernetes_version                  = var.aks_version
      private_cluster_enabled             = var.private_cluster_enabled
      private_cluster_public_fqdn_enabled = var.private_cluster_public_fqdn_enabled
      private_dns_zone_id                 = var.private_dns_zone_id
      sku_tier                            = var.sku_tier

      default_node_pool {
        name                         = "syspool01"
        vm_size                      = var.agents_size
        os_disk_size_gb              = var.os_disk_size_gb
        node_count                   = var.agents_count
        vnet_subnet_id               = data.azurerm_subnet.subnet.id
        zones                        = [1, 2, 3]
        kubelet_disk_type            = "OS"
        os_sku                       = "Ubuntu"
        os_disk_type                 = "Managed"
        ultra_ssd_enabled            = "false"
        max_pods                     = var.max_pods
        only_critical_addons_enabled = var.only_critical_addons_enabled
        temporary_name_for_rotation  = var.temporary_name_for_rotation
        orchestrator_version         = var.aks_version
      }

      linux_profile {
        admin_username = var.admin_username
        ssh_key {
          key_data = replace(coalesce("${var.ssh_public_key}", tls_private_key.ssh[0].public_key_openssh), "\n", "")
        }
      }

      network_profile {
        network_plugin    = "azure"
        network_policy    = "azure"
        load_balancer_sku = "standard"
        outbound_type     = "userDefinedRouting"
        service_cidr      = var.service_cidr
        dns_service_ip    = var.dns_service_ip
      }

      tags = {
        Environment = "${var.tag}"
      }

      identity {
        type = "SystemAssigned"
      }
    }

    ## usernodepool
    resource "azurerm_kubernetes_cluster_node_pool" "usernodepool" {
      for_each              = var.usernodepoo_vm
      name                  = each.value.user_agents_name
      kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
      vm_size               = each.value.user_agents_size
      os_disk_size_gb       = each.value.user_agents_os_disk_size
      node_count            = each.value.user_agents_count
      vnet_subnet_id        = data.azurerm_subnet.subnet.id
      zones                 = [1, 2, 3]
      mode                  = "User"
      kubelet_disk_type     = "OS"
      os_sku                = "Ubuntu"
      os_disk_type          = "Managed"
      ultra_ssd_enabled     = "false"
      max_pods              = each.value.max_pods
      orchestrator_version  = each.value.orchestrator_version
    }

Applying this Terraform code will attempt to recreate the entire cluster. Is there a way to prevent this and just increase the number of max_pods?

I tried configuring it as below, but the result was the same.

    resource "azurerm_kubernetes_cluster" "aks" {
      ...
      lifecycle {
        prevent_destroy = true
      }
    }
    Error: Instance cannot be destroyed

      on cluster.tf line 63:
      63: resource "azurerm_kubernetes_cluster" "aks" {

    Resource azurerm_kubernetes_cluster.aks has lifecycle.prevent_destroy set,
    but the plan calls for this resource to be destroyed. To avoid this error
    and continue with the plan, either disable lifecycle.prevent_destroy or
    reduce the scope of the plan using the -target flag.

Answer 1

Score: 2

> Applying this Terraform code will attempt to recreate the entire cluster. Is there a way to prevent this and just increase the number of max_pods?

To prevent recreating the entire cluster and only update the max_pods value, you can use the Terraform lifecycle configuration block to manage the behavior of the resource during updates. Note that prevent_destroy alone does not turn a replacement into an in-place update; what avoids the cluster rebuild is the temporary_name_for_rotation argument on the default_node_pool, which lets the provider cycle the system node pool (create a temporary pool, rebuild the original pool with the new max_pods, then remove the temporary pool) instead of replacing the whole cluster.

Here is sample code that updates max_pods without destroying the existing AKS cluster:

    provider "azurerm" {
      features {}
    }

    resource "azurerm_resource_group" "aksdemo-rg" {
      name     = "demo-rg-aks-test"
      location = "West Europe"
    }

    resource "azurerm_kubernetes_cluster" "hellaks" {
      name                = "example-aks1"
      location            = azurerm_resource_group.aksdemo-rg.location
      resource_group_name = azurerm_resource_group.aksdemo-rg.name
      dns_prefix          = "exampleaks1"

      default_node_pool {
        name                        = "default"
        node_count                  = 3
        max_pods                    = 30
        vm_size                     = "Standard_D2_v2"
        temporary_name_for_rotation = "exampleaks1temp"
      }

      identity {
        type = "SystemAssigned"
      }

      tags = {
        Environment = "Production"
      }

      lifecycle {
        prevent_destroy = true
      }
    }
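(For this to plan as an in-place node pool rotation rather than a cluster replacement, the azurerm provider must be recent enough to support rotating the default node pool on a max_pods change; this support arrived around provider v3.47, but treat the exact minimum version as an assumption and verify it against the provider changelog. A version constraint along these lines pins that:)

    terraform {
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = ">= 3.47.0" # assumed minimum for max_pods rotation; verify in the changelog
        }
      }
    }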

Terraform Plan:

(screenshot in the original post)

Terraform Apply:

(screenshot in the original post)

Output:

(screenshot in the original post)

Posted 2023-06-09. Source: https://go.coder-hub.com/76436762.html