Deploy AKS into a corporate landing zone

Introduction

In addition to Managed Kubernetes Services (K8SAAS), MCS provides a way to deploy AKS on your own into a corporate landing zone. The purpose of this documentation is to explain how to proceed.

Use cases:

  • K8SAAS does not provide a specific feature you need
  • You want to deploy a private AKS cluster (meaning the API server will be private)
  • You are not interested in delegating part of the recurring activities, such as monitoring and upgrades

Feature description

A landing zone with the corporate add-on does not allow public exposure of workloads. For AKS, this means:

  • the control plane must be private and attached to the virtual network of the subscription
  • publicly exposed applications are blocked on the Azure side. You can still try to create a publicly exposed application, but the deployment will fail (see the details in kubectl events)
  • a dedicated private DNS zone is necessary for the control plane. Only the format <subzone>.privatelink.<region>.azmk8s.io is supported (https://learn.microsoft.com/en-us/azure/aks/private-clusters?tabs=azure-portal#configure-a-private-dns-zone)
  • only a user with the lead dev role can create a cluster: a role assignment is needed to allow the cluster to interact with the network

The reserved network size is only compatible with Azure CNI Overlay and kubenet (the Enabler example below uses kubenet; a sketch of the CNI Overlay variant follows).
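If you prefer Azure CNI Overlay over kubenet, the network_profile block of the azurerm_kubernetes_cluster resource could look like the minimal sketch below. The pod_cidr value is an assumption: pick any range that does not overlap the landing zone address space.

network_profile {
  network_plugin      = "azure"          # Azure CNI
  network_plugin_mode = "overlay"        # pods get IPs from a private overlay CIDR, not from the subnet
  pod_cidr            = "192.168.0.0/16" # assumption: adjust to avoid overlapping the landing zone network
  outbound_type       = "userAssignedNATGateway"
}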

Enabler

Terraform code example to deploy a private AKS cluster into a corporate landing zone

This example is extracted from our automated test repository.

resource "azurerm_private_dns_zone" "this" {
name = "aks-endpoint-${var.random_id}.privatelink.westeurope.azmk8s.io"
resource_group_name = var.rg_name
}

resource "azurerm_role_assignment" "dns_zone_contributor" {
scope = azurerm_private_dns_zone.this.id
principal_id = azurerm_user_assigned_identity.this.principal_id
role_definition_name = "Private DNS Zone Contributor"
}

resource "azurerm_user_assigned_identity" "this" {
name = "aks-id-${var.random_id}"
location = var.location
resource_group_name = var.rg_name
}


resource "azurerm_role_assignment" "vnet_contributor" {
scope = data.azurerm_subscription.current.id #data.azurerm_virtual_network.this.id
principal_id = azurerm_user_assigned_identity.this.principal_id
role_definition_name = "Network Contributor"
}



resource "azurerm_kubernetes_cluster" "this" {
name = "aks-${var.random_id}"
location = var.location
resource_group_name = var.rg_name
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_D2_v2"
vnet_subnet_id = data.azurerm_subnet.pool.id
enable_node_public_ip = false
}

private_dns_zone_id = azurerm_private_dns_zone.this.id

dns_prefix = "api"

private_cluster_enabled = true

identity {
type = "UserAssigned"
identity_ids = [azurerm_user_assigned_identity.this.id]
}

network_profile {
network_plugin = "kubenet"


outbound_type = "userAssignedNATGateway"
}

depends_on = [
azurerm_role_assignment.dns_zone_contributor,
azurerm_role_assignment.vnet_contributor
]

}
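
The example above references network resources that already exist in the landing zone through data sources, which are not part of the extract. A minimal sketch of those lookups could look like this; the variable names (var.vnet_name, var.subnet_name) and the assumption that the virtual network lives in the same resource group are placeholders to adapt to your subscription.

data "azurerm_subscription" "current" {}

data "azurerm_virtual_network" "this" {
  name                = var.vnet_name # assumption: name of the landing zone virtual network
  resource_group_name = var.rg_name   # assumption: the vnet may live in another resource group
}

data "azurerm_subnet" "pool" {
  name                 = var.subnet_name # assumption: the reserved subnet for the node pool
  virtual_network_name = data.azurerm_virtual_network.this.name
  resource_group_name  = var.rg_name
}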

Explanation:

  • the private DNS zone is needed to create a resolvable name for the API endpoint
  • the identity is used by the cluster to manipulate the network (attach the cluster nodes) and to configure the DNS zone
  • private_cluster_enabled creates the private API endpoint on the subnet referenced by vnet_subnet_id
  • the depends_on block is necessary to avoid creating the cluster before the rights are assigned to the managed identity
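
To verify that the API endpoint is private, you can expose its FQDN as a Terraform output; private_fqdn is an attribute of the azurerm_kubernetes_cluster resource, the output name is just an example. The FQDN should only resolve from within the virtual network (or a network connected to it).

output "aks_private_fqdn" {
  # FQDN of the private API endpoint, registered in the private DNS zone created above
  value = azurerm_kubernetes_cluster.this.private_fqdn
}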