# Install Gluu Flex on AKS

## System Requirements
{% include "includes/cn-system-requirements.md" %}
## Initial Setup
-   Before initiating the setup, please obtain an SSA for a Flex trial, after which you will be issued a JWT.

-   Install Azure CLI.

-   Create a resource group:

    ```
    az group create --name gluu-resource-group --location eastus
    ```

-   Create an AKS cluster, for example:

    ```
    az aks create -g gluu-resource-group -n gluu-cluster --enable-managed-identity --node-vm-size NODE_TYPE --node-count 2 --enable-addons monitoring --enable-msi-auth-for-monitoring --generate-ssh-keys
    ```

    You can adjust `node-count` and `node-vm-size` as per your desired cluster size.

-   Connect to the cluster:

    ```
    az aks install-cli
    az aks get-credentials --resource-group gluu-resource-group --name gluu-cluster
    ```

-   Install Helm3.

-   Create the `gluu` namespace where our resources will reside:

    ```
    kubectl create namespace gluu
    ```
## Gluu Flex Installation using Helm
-   Install Nginx-Ingress, if you are not using Istio ingress:

    ```
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo add stable https://charts.helm.sh/stable
    helm repo update
    helm install nginx ingress-nginx/ingress-nginx
    ```
-   Create a file named `override.yaml` and add changes as per your desired configuration:

    -   FQDN/domain is not registered:

        Get the LoadBalancer IP:

        ```
        kubectl get svc nginx-ingress-nginx-controller --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
        ```

        Add the following yaml snippet to your `override.yaml` file:

        ```yaml
        global:
          lbIp: "" # Add the LoadBalancer IP from the previous command
          isFqdnRegistered: false
        ```
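The `jsonpath` expression above just digs into the Service object's status fields. As a minimal illustration of what it extracts, here is a Python sketch of the same lookup; the sample Service data and the `load_balancer_ip` helper are ours, not part of kubectl or the chart:

```python
import json

# A trimmed-down Service object, shaped like `kubectl get svc -o json` output.
# The IP address here is a made-up example, not a real cluster address.
service_json = json.loads("""
{
  "kind": "Service",
  "metadata": {"name": "nginx-ingress-nginx-controller"},
  "status": {
    "loadBalancer": {
      "ingress": [{"ip": "20.120.10.5"}]
    }
  }
}
""")

def load_balancer_ip(service: dict) -> str:
    # Mirrors the jsonpath '{.status.loadBalancer.ingress[0].ip}' lookup.
    return service["status"]["loadBalancer"]["ingress"][0]["ip"]

print(load_balancer_ip(service_json))  # the value to put under global.lbIp
```

If the command prints nothing, the cloud load balancer usually has not finished provisioning yet; re-run it after a short wait.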
    -   FQDN/domain is registered:

        Add the following yaml snippet to your `override.yaml` file:

        ```yaml
        global:
          lbIp: "" # Add the LoadBalancer IP from the previous command
          isFqdnRegistered: true
          fqdn: demoexample.gluu.org # CHANGE-THIS to the FQDN used for Gluu
        nginx-ingress:
          ingress:
            path: /
            hosts:
              - demoexample.gluu.org # CHANGE-THIS to the FQDN used for Gluu
            tls:
              - secretName: tls-certificate
                hosts:
                  - demoexample.gluu.org # CHANGE-THIS to the FQDN used for Gluu
        ```
    -   LDAP/OpenDJ for persistence storage:

        Prepare a cert and key for OpenDJ, for example:

        ```
        openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes -keyout opendj.key -out opendj.crt -subj '/CN=demoexample.gluu.org' -addext 'subjectAltName=DNS:ldap,DNS:opendj'
        ```

        Extract the contents of the OpenDJ cert and key files as base64 strings:

        ```
        OPENDJ_CERT_B64=$(base64 opendj.crt -w0)
        OPENDJ_KEY_B64=$(base64 opendj.key -w0)
        ```

        Add the following yaml snippet to your `override.yaml` file:

        ```yaml
        global:
          cnPersistenceType: ldap
          storageClass:
            provisioner: disk.csi.azure.com
          opendj:
            enabled: true
        config:
          configmap:
            # -- contents of OpenDJ cert file in base64-string
            cnLdapCrt: <OPENDJ_CERT_B64>
            # -- contents of OpenDJ key file in base64-string
            cnLdapKey: <OPENDJ_KEY_B64>
        ```
        So if your desired configuration has no FQDN and LDAP, the final `override.yaml` file will look something like this:

        ```yaml
        global:
          cnPersistenceType: ldap
          lbIp: "" # Add the LoadBalancer IP from the previous command
          isFqdnRegistered: false
          storageClass:
            provisioner: disk.csi.azure.com
          opendj:
            enabled: true
        config:
          configmap:
            # -- contents of OpenDJ cert file in base64-string
            cnLdapCrt: <OPENDJ_CERT_B64>
            # -- contents of OpenDJ key file in base64-string
            cnLdapKey: <OPENDJ_KEY_B64>
        nginx-ingress:
          ingress:
            path: /
            hosts:
              - demoexample.gluu.org # CHANGE-THIS to the FQDN used for Flex
            tls:
              - secretName: tls-certificate
                hosts:
                  - demoexample.gluu.org # CHANGE-THIS to the FQDN used for Flex
        ```
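Before pasting the base64 strings into `override.yaml`, it can be worth sanity-checking that they decode back to the original PEM content. A minimal Python sketch of that round trip; the PEM bytes below are a placeholder standing in for `opendj.crt`, not a real certificate:

```python
import base64

# Placeholder PEM content standing in for opendj.crt; not a real certificate.
pem = b"-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"

# Equivalent of `base64 opendj.crt -w0`: a single unwrapped base64 line.
encoded = base64.b64encode(pem).decode("ascii")
assert "\n" not in encoded  # -w0 means no line wrapping

# Decoding must reproduce the file byte-for-byte.
assert base64.b64decode(encoded) == pem
print("base64 round trip OK")
```

A wrapped (multi-line) base64 value pasted into YAML is a common source of broken `cnLdapCrt`/`cnLdapKey` values, which is why `-w0` matters.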
    -   Couchbase for persistence storage:

        Add the following yaml snippet to your `override.yaml` file:

        ```yaml
        global:
          cnPersistenceType: couchbase
        config:
          configmap:
            # -- The prefix of Couchbase buckets. This helps with separation between different environments and allows the same Couchbase cluster to be used by different setups of Janssen.
            cnCouchbaseBucketPrefix: jans
            # -- Couchbase certificate authority string, encoded using base64. This can also be found in your Couchbase UI under Security > Root Certificate. In mTLS setups this is not required.
            cnCouchbaseCrt: SWFtTm90YVNlcnZpY2VBY2NvdW50Q2hhbmdlTWV0b09uZQo=
            # -- The number of replicas per index created. Note that the number of index nodes must be one greater than the number of index replicas; if your Couchbase cluster has only 2 index nodes, you cannot set the number of replicas higher than 1.
            cnCouchbaseIndexNumReplica: 0
            # -- Couchbase password for the restricted user config.configmap.cnCouchbaseUser that is often used inside the services. The password must contain one digit, one uppercase letter, one lowercase letter, and one symbol.
            cnCouchbasePassword: P@ssw0rd
            # -- The Couchbase superuser (admin) username. This user is used during initialization only.
            cnCouchbaseSuperUser: admin
            # -- Couchbase password for the superuser config.configmap.cnCouchbaseSuperUser that is used during the initialization process. The password must contain one digit, one uppercase letter, one lowercase letter, and one symbol.
            cnCouchbaseSuperUserPassword: Test1234#
            # -- Couchbase URL. This should be in FQDN format for either remote or local Couchbase clusters. The address can be an internal address inside the Kubernetes cluster.
            cnCouchbaseUrl: cbjanssen.default.svc.cluster.local
            # -- Couchbase restricted user.
            cnCouchbaseUser: janssen
        ```
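The Couchbase password rule quoted in the comments above (at least one digit, one uppercase letter, one lowercase letter, and one symbol) is easy to pre-check before deploying. A small Python sketch; the `meets_couchbase_policy` helper is ours, not part of the chart:

```python
import re

def meets_couchbase_policy(password: str) -> bool:
    # Checks the rule quoted in the chart values: at least one digit,
    # one uppercase letter, one lowercase letter, and one symbol.
    return all([
        re.search(r"\d", password),          # at least one digit
        re.search(r"[A-Z]", password),       # at least one uppercase letter
        re.search(r"[a-z]", password),       # at least one lowercase letter
        re.search(r"[^A-Za-z0-9]", password) # at least one symbol
    ])

print(meets_couchbase_policy("P@ssw0rd"))  # True
print(meets_couchbase_policy("password"))  # False: no digit, upper, or symbol
```

Validating the passwords up front avoids a failed Couchbase initialization that only surfaces deep in the pod logs.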
    -   PostgreSQL for persistence storage:

        In a production environment, a production-grade PostgreSQL server should be used, such as Azure Database for PostgreSQL.

        For testing purposes, you can deploy it on the AKS cluster using the following command:

        ```
        helm install my-release --set auth.postgresPassword=Test1234#,auth.database=gluu -n gluu oci://registry-1.docker.io/bitnamicharts/postgresql
        ```

        Add the following yaml snippet to your `override.yaml` file:

        ```yaml
        global:
          cnPersistenceType: sql
        config:
          configmap:
            cnSqlDbName: gluu
            cnSqlDbPort: 5432
            cnSqlDbDialect: pgsql
            cnSqlDbHost: my-release-postgresql.gluu.svc
            cnSqlDbUser: postgres
            cnSqlDbTimezone: UTC
            cnSqldbUserPassword: Test1234#
        ```
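To double-check the SQL values before installing, you can assemble them into a libpq-style connection URL and try it with `psql` from a pod inside the cluster. A minimal Python sketch that builds the URL from the configmap keys above; the `postgres_dsn` helper is ours, and note that symbols in the password (like `#`) must be percent-encoded in a URL:

```python
from urllib.parse import quote

# The same values as in the override.yaml snippet above.
cfg = {
    "cnSqlDbHost": "my-release-postgresql.gluu.svc",
    "cnSqlDbPort": 5432,
    "cnSqlDbName": "gluu",
    "cnSqlDbUser": "postgres",
    "cnSqldbUserPassword": "Test1234#",
}

def postgres_dsn(cfg: dict) -> str:
    # '#' and other symbols in the password must be percent-encoded,
    # otherwise the URL parser treats '#' as a fragment delimiter.
    password = quote(cfg["cnSqldbUserPassword"], safe="")
    return (
        f"postgresql://{cfg['cnSqlDbUser']}:{password}"
        f"@{cfg['cnSqlDbHost']}:{cfg['cnSqlDbPort']}/{cfg['cnSqlDbName']}"
    )

print(postgres_dsn(cfg))
# postgresql://postgres:Test1234%23@my-release-postgresql.gluu.svc:5432/gluu
```

In `override.yaml` itself the password goes in verbatim; the encoding only matters when you paste it into a URL for a manual connectivity test.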
    -   MySQL for persistence storage:

        In a production environment, a production-grade MySQL server should be used, such as Azure Database for MySQL.

        For testing purposes, you can deploy it on the AKS cluster using the following command:

        ```
        helm install my-release --set auth.rootPassword=Test1234#,auth.database=gluu -n gluu oci://registry-1.docker.io/bitnamicharts/mysql
        ```

        Add the following yaml snippet to your `override.yaml` file:

        ```yaml
        global:
          cnPersistenceType: sql
        config:
          configmap:
            cnSqlDbName: gluu
            cnSqlDbPort: 3306
            cnSqlDbDialect: mysql
            cnSqlDbHost: my-release-mysql.gluu.svc
            cnSqlDbUser: root
            cnSqlDbTimezone: UTC
            cnSqldbUserPassword: Test1234#
        ```
        So if your desired configuration has FQDN and MySQL, the final `override.yaml` file will look something like this:

        ```yaml
        global:
          cnPersistenceType: sql
          lbIp: "" # Add the LoadBalancer IP from the previous command
          isFqdnRegistered: true
          fqdn: demoexample.gluu.org # CHANGE-THIS to the FQDN used for Gluu
        nginx-ingress:
          ingress:
            path: /
            hosts:
              - demoexample.gluu.org # CHANGE-THIS to the FQDN used for Gluu
            tls:
              - secretName: tls-certificate
                hosts:
                  - demoexample.gluu.org # CHANGE-THIS to the FQDN used for Gluu
        config:
          configmap:
            cnSqlDbName: gluu
            cnSqlDbPort: 3306
            cnSqlDbDialect: mysql
            cnSqlDbHost: my-release-mysql.gluu.svc
            cnSqlDbUser: root
            cnSqlDbTimezone: UTC
            cnSqldbUserPassword: Test1234#
        ```
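Since the FQDN appears in three separate places in the final file, a quick consistency check can catch a missed `CHANGE-THIS`. A hypothetical Python sketch, representing the parsed `override.yaml` as a plain dict (in practice you would load the file with a YAML parser); the `fqdn_mentions` helper is ours:

```python
# Parsed override.yaml represented as a dict; only the FQDN-related keys
# are shown. In practice, load the real file with a YAML library.
override = {
    "global": {"isFqdnRegistered": True, "fqdn": "demoexample.gluu.org"},
    "nginx-ingress": {
        "ingress": {
            "hosts": ["demoexample.gluu.org"],
            "tls": [{"secretName": "tls-certificate",
                     "hosts": ["demoexample.gluu.org"]}],
        }
    },
}

def fqdn_mentions(override: dict) -> set:
    # Collect every place the FQDN is supposed to appear.
    ingress = override["nginx-ingress"]["ingress"]
    names = {override["global"]["fqdn"]}
    names.update(ingress["hosts"])
    for entry in ingress["tls"]:
        names.update(entry["hosts"])
    return names

# All three spots must agree on a single FQDN.
assert len(fqdn_mentions(override)) == 1
print("FQDN consistent:", sorted(fqdn_mentions(override))[0])
```

A mismatch here typically shows up later as a TLS hostname error or an ingress that never routes to the Gluu services.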
-   Install Gluu Flex.

    After finishing all the tweaks to the `override.yaml` file, we can use it to install Gluu Flex:

    ```
    helm repo add gluu-flex https://docs.gluu.org/charts
    helm repo update
    helm install gluu gluu-flex/gluu -n gluu -f override.yaml
    ```
## Configure Gluu Flex
You can use the Janssen TUI to configure Flex components. The TUI calls the Config API to perform ad hoc configuration.
Created: 2022-09-22