To streamline and automate the deployment of an Azure Kubernetes Service (AKS) cluster within a pre-existing subnet and ensure a successful installation of Jira, you need to employ a specific set of tools and technologies.

By the end of this blog post, you will be able to spin up a Jira instance running on an AKS cluster that is built in a pre-existing subnet in Azure.

Tools and tech stack used

Each tool plays a crucial role in managing different aspects of the deployment process:

  1. Bash, Terraform CLI, Helm CLI, Azure CLI, Visual Studio Code
    Bash automates tasks; Terraform CLI provisions infrastructure; Helm CLI deploys Jira on Kubernetes; Azure CLI manages Azure resources; and Visual Studio Code aids in editing scripts and charts.
  2. Terraform
    Provisions and configures infrastructure.
  3. Azure
    Provides the cloud resources.
  4. Helm
    Deploys and manages Jira on Kubernetes.
  5. Jira
    Provides project management and issue-tracking functionalities.

What is the desired state?

Setting up an Azure Kubernetes Service (AKS) cluster is pretty straightforward: it can be done with just a few Azure CLI commands. But as soon as you need custom configuration, such as a specific network setup or different managed users, it gets more complicated.
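
For context, the "straightforward" case can look as simple as the sketch below. The resource group, cluster name, and region are placeholders, not values from this project.

# Minimal AKS setup with Azure CLI defaults: no custom vnet, identities, or ingress.
az group create --name rg-aks-demo --location westeurope
az aks create --resource-group rg-aks-demo --name aks-demo \
  --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group rg-aks-demo --name aks-demo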

In this real-life scenario, we build an AKS cluster in Azure with a database to run Jira. The subnets and the managed user, however, already exist. You are quite likely to be confronted with such a scenario, since your team or customer will probably already have an infrastructure with a specific IP range and further requirements into which the AKS cluster has to be integrated.

Technically, integrating the AKS cluster into an existing virtual network (vnet) means rearranging the order in which components are created and granting the managed user specific permissions in advance, so that the Terraform script runs successfully.

Our goal is to automate as much as possible, so we deploy and manage our infrastructure as Infrastructure as Code (IaC), which helps us keep the live environment in sync with our configuration and avoid drift. Therefore, we create all infrastructure components using Terraform.

Architecture

The components of the Resource Group rg-jira-fw01 on the left are pre-existing components.

To organize our resources and manage access, we use three resource groups: one for the AKS cluster, one for the Application Gateway, and another for its supporting components. The Terraform script deploys a node pool with two nodes in the pre-existing AKS subnet, an MSSQL server with two databases (Jira and EazyBI), and an Application Gateway with a public IP that serves as the ingress and terminates incoming SSL connections. The Application Gateway is deployed in the AppGW subnet.

The graph below also shows the components for the Terraform state file (tfstate).

[Figure: AKS architecture]

Repository

The whole project can be cloned from https://github.com/eficode/aks2jira.

Set up environment

For a smooth setup, follow these steps to prepare your environment and store your Terraform state file remotely.

Tools/CLI

We're going to skip the installation of the CLIs (see Tools and tech stack used), as there are many resources describing how to install them for your operating system.

Store tfstate remotely

It is recommended to store your Terraform state remotely. Keeping it locally will not cause any issues while you're working alone, but as soon as a second developer joins, a remote backend becomes necessary (see the Terraform docs).

Therefore, we need to:

  1. Create a Resource Group.
  2. Create a Storage Account.
  3. Create a Blob Container.
  4. Set environment variable (to make Terraform pick the storage account).
  5. Configure Terraform.

All these steps are performed by running the script ./scripts/create_tfstate_storage.ps1|sh (PowerShell or Bash variant) once you're logged in to your Azure account.
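
Under the hood, the script does roughly the following. The names below are placeholders for illustration only; the script in the repository defines its own.

# Resource group, storage account, and blob container for the remote state.
az group create --name rg-jira-tfstate --location westeurope
az storage account create --name jiratfstate --resource-group rg-jira-tfstate --sku Standard_LRS
az storage container create --name tfstate --account-name jiratfstate

# Expose the storage key so the Terraform azurerm backend can reach the container.
export ARM_ACCESS_KEY=$(az storage account keys list \
  --resource-group rg-jira-tfstate --account-name jiratfstate \
  --query '[0].value' --output tsv)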

Create a Terraform workspace

We use workspaces to create different environments, such as dev, int, and prod. The names of these workspaces are later used by Terraform to define component names in Azure, e.g., the resource group names.

To create a workspace, run:

terraform workspace new dev
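
If the workspace already exists, or you want to double-check which one is active, the usual workspace commands apply:

terraform workspace list    # lists all workspaces; the active one is marked with *
terraform workspace select dev
terraform workspace show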

Configure Terraform script

In variables.tf, we set the name of the previously created managed user ID and the pod CIDR according to the customer’s predefined IP range. Since we are adding a new environment, we also need to extend the VM size map with an entry for it.

user_managed_identity = "Jira-uid-${terraform.workspace}"
aks_pod_cidr = "10.210.2.0/24"
aks_vmsize = {
  dev = "Standard_B4as_v2"
} 

Using data blocks in the network_aks module, we can fetch the existing managed user ID object and both subnets for the AKS and the Application Gateway that are in the vnet called vnet-jira:

# Fetch existing User Managed Identity
data "azurerm_user_assigned_identity" "aks_identity" {
  resource_group_name = "rg-jira-net"
  name                = local.user_managed_identity
}

# Fetch existing Subnet for AKS
data "azurerm_subnet" "aks_subnet" {
  name                 = "aks-subnet"
  virtual_network_name = "vnet-jira"
  resource_group_name  = "rg-jira-net"
}

# Fetch existing Subnet for AppGW
data "azurerm_subnet" "appgw_subnet" {
  name                 = "appgw-subnet"
  virtual_network_name = "vnet-jira"
  resource_group_name  = "rg-jira-net"
}

You can use the fetched variables in your Terraform script like so:

aks_subnet_id            = module.network_aks.aks_subnet_id
appgw_subnet_id          = module.network_aks.appgw_subnet_id
user_managed_identity_id = data.azurerm_user_assigned_identity.aks_identity.principal_id

Create the AKS cluster

The main changes are based on the file terraform/modules/aks/main.tf. The AKS resource contains all the necessary information to create the cluster with the desired settings. Since we have fetched the existing AKS subnet, its ID must be provided.

# Create AKS Cluster
resource "azurerm_kubernetes_cluster" "aks" {
  #...
  default_node_pool {
    #...
    vnet_subnet_id = var.aks_subnet_id # Existing Subnet ID here
  }

  identity {
    type         = "UserAssigned"
    identity_ids = [var.user_managed_identity] # Existing Managed User ID here
  }
}

Set permissions

Since the network infrastructure is not created by the AKS cluster itself, some permissions need to be set explicitly:

  1. The Ingress User of the Application Gateway needs to be a contributor on the Application Gateway.
  2. The Ingress User of the Application Gateway needs to be a contributor on the resource group containing the Application Gateway.
  3. The Ingress User of the Application Gateway needs to be a network contributor on the AKS subnet.
  4. The Ingress User of the Application Gateway needs to be a managed identity operator on the managed user.
  5. The Ingress User of the Application Gateway needs to be a network contributor on the virtual network.

# Set Permissions
resource "azurerm_role_assignment" "ingressuser_appgw" {
  scope                = azurerm_application_gateway.appgw.id
  role_definition_name = "Contributor"
  principal_id         = azurerm_kubernetes_cluster.aks.ingress_application_gateway[0].ingress_application_gateway_identity[0].object_id
}

resource "azurerm_role_assignment" "ingressuser_rg-cust" {
  scope                = var.rg_cust_id
  role_definition_name = "Contributor"
  principal_id         = azurerm_kubernetes_cluster.aks.ingress_application_gateway[0].ingress_application_gateway_identity[0].object_id
}

resource "azurerm_role_assignment" "ingressuser_subnet-aks" {
  scope                = var.aks_subnet_id
  role_definition_name = "Network Contributor"
  principal_id         = azurerm_kubernetes_cluster.aks.ingress_application_gateway[0].ingress_application_gateway_identity[0].object_id
}

resource "azurerm_role_assignment" "ingressuser_jirauser" {
  scope                = var.user_managed_identity
  role_definition_name = "Managed Identity Operator"
  principal_id         = azurerm_kubernetes_cluster.aks.ingress_application_gateway[0].ingress_application_gateway_identity[0].object_id
}

resource "azurerm_role_assignment" "ingressuser_vnet" {
  scope                = var.user_managed_identity
  role_definition_name = "Network Contributor"
  principal_id         = azurerm_kubernetes_cluster.aks.ingress_application_gateway[0].ingress_application_gateway_identity[0].object_id
}
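
After the apply, you can sanity-check that the role assignments landed on the ingress controller's identity. This is a sketch: the resource group, cluster name, and query path are assumptions and may differ in your setup and CLI version.

# Object ID of the ingress (AGIC) identity created by the AKS addon.
INGRESS_ID=$(az aks show --resource-group rg-jira-aks-dev --name aks-jira-dev \
  --query "addonProfiles.ingressApplicationGateway.identity.objectId" --output tsv)

# List all role assignments for that identity across scopes.
az role assignment list --assignee "$INGRESS_ID" --all --output table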


Create Application Gateway

After creating the frontend public IP, you create the Application Gateway and provide it with the frontend IP configuration.

# Create Application Gateway
resource "azurerm_application_gateway" "appgw" {
  #...
  identity {
    type = "UserAssigned"
    identity_ids = [var.user_managed_identity]
  }  

  gateway_ip_configuration {
    name = "appgw-ip-config"
    subnet_id = var.appgw_subnet_id
  }

  frontend_ip_configuration {
    name = azurerm_public_ip.appgw-pip.name
    public_ip_address_id = azurerm_public_ip.appgw-pip.id
  }

  depends_on = [ azurerm_public_ip.appgw-pip ]
}
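
Once the gateway is up, you can read back its public IP, which you will need later for DNS or the hosts-file workaround. The resource group and public IP name below are placeholders; adjust them to your deployment.

# Public IP of the Application Gateway frontend.
az network public-ip show --resource-group rg-jira-appgw-dev \
  --name appgw-pip --query ipAddress --output tsv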

Create infrastructure

You can create the infrastructure by running the usual Terraform commands (we will not explain them in detail here).

terraform init
terraform plan
terraform apply
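
When the apply has finished, fetch the cluster credentials and confirm that the two nodes are up. The resource group and cluster names are placeholders; adjust them to your environment.

az aks get-credentials --resource-group rg-jira-aks-dev --name aks-jira-dev
kubectl get nodes -o wide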

Install Jira

After the infrastructure has been created, we will install the Jira application via the official Helm Chart. Since we have our own settings, we provide a custom values file.

Change hosts file

If you don’t own a domain yet, you can test with an arbitrary domain, e.g., mydomain.com, by mapping it to the public IP of the Application Gateway in your hosts file.

Windows: C:\Windows\System32\Drivers\etc\hosts
Linux: /etc/hosts

<ip-of-appgw> jira.mydomain.com
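
On Linux, you can append the entry directly; replace <ip-of-appgw> with the Application Gateway public IP from the earlier step.

echo "<ip-of-appgw> jira.mydomain.com" | sudo tee -a /etc/hosts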

Change Helm values file

The application-level setup of Jira is configured in ./terraform/modules/jira/values-jira.yaml. Refer to the Atlassian documentation for full details; below are some particularly important settings.

replicaCount: 1
#This represents how many Jira nodes/pods should be deployed. No more than one should be added or removed at a time until the Jira cluster has stabilized.

image.tag: "9.5.1"
#This refers to the Docker image tag to deploy, which in turn corresponds to a Jira version. Valid tags can be found on Docker Hub (search for atlassian/jira-software).

database.url:
# This should be empty during the initial setup of a Jira with a blank DB. It should later be updated with the private-link DNS name plus the rest of the Azure-supplied JDBC string, minus username and password.
# Example: jdbc:sqlserver://jira-dbserver-dev.privatelink.database.windows.net:1433;database=jira-db-jira-dev;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;

database.credentials.secretName: jiradb-secret
#This refers to the name of a secret created by Terraform, containing username and password for the database (you can inspect it with kubectl, as sketched after this listing).

volumes.localHome.storageClassName: managed-csi
#This refers to the type of storage to request from Azure when setting up Jira local home for a pod.

volumes.sharedHome.storageClassName: azurefile-csi-premium-retain
#This refers to the type of storage to request from Azure when setting up Jira shared home for all pods.

ingress.className: azure-application-gateway
#Request that our ingress uses Azure AppGw

ingress.nginx: false
#Explicitly tell the helm chart not to use nginx instead of AppGW

ingress.host: jira.mydomain.com
#This must correspond to the DNS name where jira is expected to be reached.

jira.resources.jvm.maxHeap: <gb>
jira.resources.jvm.minHeap: <gb>
#The maximum and minimum heap memory allowed to be used by a single Jira pod. Set this according to the size of the VM.

jira.resources.container.requests.cpu: <cpu>
jira.resources.container.requests.memory: <mem>
#The CPU and memory resources requested by a Jira pod. Set these according to the size of the VM. Memory should leave some headroom above maxHeap.

jira.forceConfigUpdate: true
#This makes sure that any changes in the values file get applied to the pods, making the values file the source of truth rather than the local settings on an ephemeral pod.

annotations: {
  appgw.ingress.kubernetes.io/cookie-based-affinity: "true",
  appgw.ingress.kubernetes.io/ssl-redirect: "true",
  appgw.ingress.kubernetes.io/backend-protocol: "http",
  appgw.ingress.kubernetes.io/appgw-ssl-certificate: "SSLCert"
}
# cookie-based-affinity: user requests are directed to the same server
# ssl-redirect: redirect http to https
# backend-protocol: since the load balancer terminates SSL, the backends listen on http
# appgw-ssl-certificate: the key under which the certificate is stored in Azure; it is uploaded with the key SSLCert by the Terraform k8s module

https: true
#Use https

additionalHosts: #add this if you do not use a public domain and want to test locally
  - ip: "<ip-of-appgw>"
    hostnames:
    - "jira.mydomain.com"


Install Jira with Helm

To install the Helm Chart with the changed values file, run the following commands:

helm repo add atlassian-data-center https://atlassian.github.io/data-center-helm-charts

helm upgrade --install jira atlassian-data-center/jira --values ./terraform/modules/jira/values-jira.yaml --namespace jira

This adds the official Atlassian repository locally and installs the application with the settings from your values file into a namespace called jira.
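
You can then watch the rollout and check that the ingress is picked up:

kubectl get pods --namespace jira --watch
kubectl get ingress --namespace jira
helm status jira --namespace jira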

Application architecture

The overview from the application perspective is as follows:

[Figure: application architecture overview]

By following this guide, you should now be equipped to handle similar deployments and manage your Infrastructure as Code with confidence.

Published: Aug 20, 2024

Tags: DevOps, Jira, Azure AKS