Provisioning an Azure Kubernetes cluster with Terraform

s1ntaxe770r


#devops #terraform #azure

In this post you will learn how to set up an Azure Kubernetes cluster using Terraform.

NOTE: This article assumes some basic knowledge of cloud concepts and the Microsoft Azure platform.

Terraform??

Terraform is an infrastructure as code tool that allows developers and operations teams to automate how they provision their infrastructure.

Why write more code for my infrastructure?

If you are new to infrastructure as code (IaC), it can seem like an extra step when you could just click a few buttons on your cloud provider's dashboard and be on your way. But IaC offers quite a few advantages:

  1. Because your infrastructure is now represented as code, it is testable.
  2. Your environments become easily reproducible.
  3. You can track changes to your infrastructure over time with a version control system like Git.
  4. Deployments are faster, because you interact with the cloud provider's dashboard less.

Before diving into Terraform you need a brief understanding of the HashiCorp Configuration Language (HCL).

HCL?

Yes, Terraform uses its own configuration language. This may seem daunting at first, but it's quite easy to pick up. Here's a quick peek at what it looks like:

resource "azurerm_resource_group" "resource-group" { name = "staging-resource-group" location = "West Europe"}

In Terraform, your infrastructure is represented as "resources": everything from networking to databases to virtual machines is a resource.

This is exactly what the resource block above represents. Here we are creating an azurerm_resource_group which, as the name implies, is a resource group. Resource groups are how you organize related resources; a typical use case would be putting all the servers for a single project under the same resource group.

Next we give the resource block a name; think of this as a variable name we can use throughout the Terraform files. Within the resource block we give our resource group a name, which is the name that will be given to the resource group in Azure. Finally, we give the location where we want the resource group to be deployed.
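That reference name is what lets other resources point at this one. As a quick illustration (this storage account is hypothetical and not part of the demo project), another resource could reuse the group's name and location like so:

resource "azurerm_storage_account" "example" {
  name                     = "stagingstorage"   # storage account names must be globally unique
  resource_group_name      = azurerm_resource_group.resource-group.name
  location                 = azurerm_resource_group.resource-group.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}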

If you are coming from something like Ansible you might notice how different Terraform's approach to configuration is. This is because Terraform uses what's known as a declarative style of configuration: simply put, you declare the state you want your infrastructure to be in, not the steps to achieve that state. You can learn more about declarative and imperative configuration here.

Now that you have an idea of what Terraform configuration looks like, let's dive in.

Project setup

Prerequisites:

  • An Azure account
  • The Azure CLI (az)
  • Terraform
  • Git

Once you have all of that set up, log in to your Azure account through the command line using the following command:

$ az login

Next, clone the sample project:

$ git clone https://github.com/s1ntaxe770r/aks-terraform-demo.git

Before we begin we need to run terraform init from inside the project directory. This downloads the Azure provider and any plugins it depends on.

$ terraform init

Taking a quick look at the folder structure, you should see something like this:

.
├── main.tf
├── modules
│   └── cluster
│       ├── cluster.tf
│       └── variables.tf
├── README.md
└── variables.tf

2 directories, 6 files

Starting from the top, let's look at main.tf:

# main.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.39.0"
    }
  }
}

provider "azurerm" {
  features {}
}

module "cluster" {
  source             = "./modules/cluster"
  ssh_key            = var.ssh_key
  location           = var.location
  kubernetes_version = var.kubernetes_version
}

First we declare which provider we are using. This is how Terraform knows which cloud platform we intend to use; it could be Google Cloud, AWS, or any other provider they support. You can learn more about Terraform providers here. It's also worth noting that each provider's block is listed in its documentation, so you don't need to write it out from memory each time.

Next we define a module block, pointing it at the folder where our module is located and passing it a few variables.

Modules??

Modules in Terraform are a way to split up your configuration so that each module handles a specific task. Sure, we could just dump all of our configuration in main.tf, but that makes things clunky and less portable.
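Besides taking variables in, a module can hand values back to its caller through output blocks. Here's a minimal sketch, assuming a hypothetical outputs.tf inside the module (the sample repo doesn't define one):

# modules/cluster/outputs.tf (hypothetical)
output "cluster_name" {
  # expose the cluster's name so the root module can reference it
  value = azurerm_kubernetes_cluster.aks-cluster.name
}

The root configuration could then read it as module.cluster.cluster_name.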

Now let's take a look at the cluster folder in the modules directory.

modules/cluster
├── cluster.tf
└── variables.tf

0 directories, 2 files

Let's take a look at cluster.tf:

# modules/cluster/cluster.tf
resource "azurerm_resource_group" "aks-resource" {
  name     = "kubernetes-resource-group"
  location = var.location
}

resource "azurerm_kubernetes_cluster" "aks-cluster" {
  name                = "terraform-cluster"
  location            = azurerm_resource_group.aks-resource.location
  resource_group_name = azurerm_resource_group.aks-resource.name
  dns_prefix          = "terraform-cluster"
  kubernetes_version  = var.kubernetes_version

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_A2_v2"
    type       = "VirtualMachineScaleSets"
  }

  identity {
    type = "SystemAssigned"
  }

  linux_profile {
    admin_username = var.admin_user

    ssh_key {
      key_data = var.ssh_key
    }
  }

  network_profile {
    network_plugin    = "kubenet"
    load_balancer_sku = "Standard"
  }
}

In the first part of the configuration we define a resource group for our cluster, cleverly name it "kubernetes-resource-group", and give it a location that comes from a variable defined in variables.tf. The next part holds the actual specs of our Kubernetes cluster. First we tell Terraform we want an Azure Kubernetes cluster using resource "azurerm_kubernetes_cluster", then we give our cluster a name, location, and resource group. We can reuse values from the resource group we defined earlier by using its reference name, aks-resource, plus the attribute we want; in this case it's the location, so we write azurerm_resource_group.aks-resource.location.

There are two more blocks that we need to pay attention to: default_node_pool and linux_profile.

The default_node_pool block lets us define how many nodes we want to run and what type of virtual machine each node runs on. It's important to pick the right size for your nodes, as this affects both cost and performance; you can look at the VM sizes Azure offers and their use cases over here. node_count tells Terraform how many nodes we want our cluster to have. Next we define the VM size. Here I'm using an A-series VM with 4 GB of RAM and two CPU cores. Lastly we give it a type of "VirtualMachineScaleSets", which basically lets Azure manage the nodes as a group of autoscaling VMs.
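If your workload fluctuates, the node pool can also scale itself instead of keeping a fixed count. A minimal sketch, assuming the azurerm provider's cluster autoscaler arguments (the demo uses a fixed node_count instead):

default_node_pool {
  name                = "default"
  vm_size             = "Standard_A2_v2"
  type                = "VirtualMachineScaleSets"   # required for autoscaling
  enable_auto_scaling = true
  min_count           = 2   # never scale below two nodes
  max_count           = 5   # cap the pool at five nodes
}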

The last block we need to look at is linux_profile. This creates a user we can use to SSH into one of the nodes in case something goes wrong. Here we simply pass the block our variables.

I intentionally didn't go over all of the blocks, because most of the time you don't need to change them, and if you do, the documentation is quite easy to go through.

Finally, let's take a look at variables.tf. As you might have guessed, this is where we define all the variables we referenced earlier.

# variables.tf
variable "location" {
  type        = string
  description = "resource location"
  default     = "East US"
}

variable "kubernetes_version" {
  type        = string
  description = "k8s version"
  default     = "1.19.6"
}

variable "admin_user" {
  type        = string
  description = "username for linux_profile"
  default     = "enderdragon"
}

variable "ssh_key" {
  description = "ssh_key for admin_user"
}

To define a variable we use the variable keyword and give it a name. Within the curly braces we define its type (in this case a string), an optional description, and a default value, which is also optional.
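Any of these defaults can be overridden without editing the file: Terraform automatically loads a terraform.tfvars file if one is present, or you can pass a -var flag on the command line (for example, terraform apply -var 'location=West Europe'). The values below are just for illustration:

# terraform.tfvars (hypothetical, not in the sample repo)
location           = "West Europe"
kubernetes_version = "1.19.6"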

Now we are almost ready to create our cluster, but first we need to generate an SSH key; if you remember, we created a variable for it earlier. If you already have an SSH key pair you can skip this step.

$ ssh-keygen -t rsa -b 4096

You can leave everything as the default by pressing Enter. Next we export the public key into an environment variable.

$ export TF_VAR_ssh_key=$(cat ~/.ssh/id_rsa.pub)

Notice the TF_VAR_ prefix before the actual variable name. This is how Terraform knows to pick up the environment variable and use it. Note that the name after the prefix must match the variable name in variables.tf.

Before we actually create our infrastructure, it's always a good idea to see exactly what Terraform would be creating. Luckily, Terraform has a command for that:

$ terraform plan

The output should look something like this:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.cluster.azurerm_kubernetes_cluster.aks-cluster will be created
  + resource "azurerm_kubernetes_cluster" "aks-cluster" {
      + dns_prefix              = "terraform-cluster"
      + fqdn                    = (known after apply)
      + id                      = (known after apply)
      + kube_admin_config       = (known after apply)
      + kube_admin_config_raw   = (sensitive value)
      + kube_config             = (known after apply)
      + kube_config_raw         = (sensitive value)
      + kubelet_identity        = (known after apply)
      + kubernetes_version      = "1.19.1"
      + location                = "eastus"
      + name                    = "terraform-cluster"
      + node_resource_group     = (known after apply)
      + private_cluster_enabled = (known after apply)
      + private_fqdn            = (known after apply)
      + private_link_enabled    = (known after apply)
      + resource_group_name     = "kubernetes-resource-group"
      + sku_tier                = "Free"

      + addon_profile {
          + aci_connector_linux {
              + enabled     = (known after apply)
              + subnet_name = (known after apply)
            }

          + azure_policy {
              + enabled = (known after apply)
            }

          + http_application_routing {
              + enabled                            = (known after apply)
              + http_application_routing_zone_name = (known after apply)
            }

          + kube_dashboard {
              + enabled = (known after apply)
            }

          + oms_agent {
              + enabled                    = (known after apply)
              + log_analytics_workspace_id = (known after apply)
              + oms_agent_identity         = (known after apply)
            }
        }

      + auto_scaler_profile {
          + balance_similar_node_groups      = (known after apply)
          + max_graceful_termination_sec     = (known after apply)
          + scale_down_delay_after_add       = (known after apply)
          + scale_down_delay_after_delete    = (known after apply)
          + scale_down_delay_after_failure   = (known after apply)
          + scale_down_unneeded              = (known after apply)
          + scale_down_unready               = (known after apply)
          + scale_down_utilization_threshold = (known after apply)
          + scan_interval                    = (known after apply)
        }

      + default_node_pool {
          + max_pods             = (known after apply)
          + name                 = "default"
          + node_count           = 2
          + orchestrator_version = (known after apply)
          + os_disk_size_gb      = (known after apply)
          + os_disk_type         = "Managed"
          + type                 = "VirtualMachineScaleSets"
          + vm_size              = "Standard_A2_v2"
        }

      + identity {
          + principal_id = (known after apply)
          + tenant_id    = (known after apply)
          + type         = "SystemAssigned"
        }

      + linux_profile {
          + admin_username = "enderdragon"

          + ssh_key {
              + key_data = "jsdksdnjcdkcdomocadcadpadmoOSNSINCDOICECDCWCdacwdcwcwccdscdfvevtbrbrtbevFCDSCSASACDCDACDCDCdsdsacdq$q@#qfesad== you@probablyyourdesktop"
            }
        }

      + network_profile {
          + dns_service_ip     = (known after apply)
          + docker_bridge_cidr = (known after apply)
          + load_balancer_sku  = "Standard"
          + network_plugin     = "kubenet"
          + network_policy     = (known after apply)
          + outbound_type      = "loadBalancer"
          + pod_cidr           = (known after apply)
          + service_cidr       = (known after apply)

          + load_balancer_profile {
              + effective_outbound_ips    = (known after apply)
              + idle_timeout_in_minutes   = (known after apply)
              + managed_outbound_ip_count = (known after apply)
              + outbound_ip_address_ids   = (known after apply)
              + outbound_ip_prefix_ids    = (known after apply)
              + outbound_ports_allocated  = (known after apply)
            }
        }

      + role_based_access_control {
          + enabled = (known after apply)

          + azure_active_directory {
              + admin_group_object_ids = (known after apply)
              + client_app_id          = (known after apply)
              + managed                = (known after apply)
              + server_app_id          = (known after apply)
              + server_app_secret      = (sensitive value)
              + tenant_id              = (known after apply)
            }
        }

      + windows_profile {
          + admin_password = (sensitive value)
          + admin_username = (known after apply)
        }
    }

  # module.cluster.azurerm_resource_group.aks-resource will be created
  + resource "azurerm_resource_group" "aks-resource" {
      + id       = (known after apply)
      + location = "eastus"
      + name     = "kubernetes-resource-group"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
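As the note at the bottom of the output points out, you can also save the plan to a file and apply exactly that plan later, which guarantees apply does precisely what plan showed:

$ terraform plan -out=tfplan
$ terraform apply tfplan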

If everything looks good we can apply our configuration using:

$ terraform apply

Terraform will prompt you one last time to make sure you want to proceed; enter yes and watch the magic happen. Once the resources have been provisioned, head over to your Azure dashboard and take a look. You should see something like this:

[Screenshot: the Azure portal showing the resources Terraform created]

As you can see, Terraform configured everything we needed to spin up a cluster, and we didn't have to specify everything ourselves. Click on terraform-cluster and let's make sure everything looks good.

[Screenshot: the terraform-cluster overview page in the Azure portal]

And there you have it: we deployed a Kubernetes cluster with our desired specifications, and Terraform did all the heavy lifting.
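If you want to interact with the cluster directly, you can pull its credentials with the Azure CLI and check the nodes with kubectl (assuming you have kubectl installed). The resource group and cluster names below are the ones from our configuration:

$ az aks get-credentials --resource-group kubernetes-resource-group --name terraform-cluster
$ kubectl get nodes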

Once you are done, tearing down all the resources you have just provisioned is as easy as running terraform destroy.
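Like apply, destroy first prints a plan of everything it is about to remove and asks for confirmation:

$ terraform destroy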

Quick recap

You learnt:

  • Why infrastructure as code is important
  • The basics of HCL (HashiCorp Configuration Language)
  • How to provision a Kubernetes cluster with Terraform

If you are wondering where to go from here, here are some things you can try:

  • Here we authenticated through the Azure CLI, but that's not completely ideal. Instead you might want to use a service principal with more tightly scoped permissions. Check that out over here.
  • You should never store your state file in version control, as it might contain sensitive information. Instead you can put it in an Azure blob store; see the backend sketch after this list.
  • There are better ways to pass variables to Terraform which I did not cover here, but this post on the Terraform website should walk you through it nicely.
  • Finally, this article couldn't possibly cover all there is to Terraform, so I highly suggest taking a look at the Terraform documentation; it has some boilerplate configuration to get you started provisioning resources.
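For the remote state suggestion above, here is a minimal sketch of an azurerm backend block, assuming you have already created a storage account and a blob container for the state (all names here are placeholders):

# add to the terraform block in main.tf
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-resources"   # group that holds the storage account
    storage_account_name = "tfstatestorage"      # placeholder; must be globally unique
    container_name       = "tfstate"
    key                  = "aks.terraform.tfstate"
  }
}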

All the code samples used in this tutorial can be found here.
