# Install Gluu Flex on EKS
## System Requirements
The resources may be set to the minimum below:

- 8 GiB RAM
- 8 CPU cores
- 50 GB disk space

Use the table below for a more detailed estimate of the minimum required resources. It contains the default resource recommendations per service; depending on how each service is used, its resource needs may increase or decrease.
| Service | CPU Unit | RAM | Disk Space | Processor Type | Required |
|---|---|---|---|---|---|
| Auth server | 2.5 | 2.5 GB | N/A | 64-bit | Yes |
| LDAP (OpenDJ) | 1.5 | 2 GB | 10 GB | 64-bit | Only if Couchbase is not installed |
| fido2 | 0.5 | 0.5 GB | N/A | 64-bit | No |
| scim | 1.0 | 1.0 GB | N/A | 64-bit | No |
| config - job | 0.5 | 0.5 GB | N/A | 64-bit | Yes on fresh installs |
| persistence - job | 0.5 | 0.5 GB | N/A | 64-bit | Yes on fresh installs |
| nginx | 1 | 1 GB | N/A | 64-bit | Yes, if not using ALB |
| auth-key-rotation | 0.3 | 0.3 GB | N/A | 64-bit | No (strongly recommended) |
| config-api | 1 | 1 GB | N/A | 64-bit | No |
| casa | 1 | 1 GB | N/A | 64-bit | No |
| admin-ui | 2 | 2 GB | N/A | 64-bit | No |
Image releases follow the tagging style `1.0.0-beta.0`, `1.0.0-0`.
## Initial Setup

- Before initiating the setup, obtain an SSA to trial Flex; you will be issued a JWT. You need to convert the JWT into base64 format so it can be used during installation, as specified by the `.global.licenseSsa` key in the `values.yaml` of Gluu's chart (a conversion sketch follows this list).

- Install aws cli.

- Configure your AWS user account using the `aws configure` command. This lets you authenticate before creating the cluster. Note that this user account must have permissions to work with Amazon EKS IAM roles and service-linked roles, AWS CloudFormation, and a VPC and related resources.

- Install kubectl.

- Install eksctl.

- Create a cluster using eksctl, as in the following example:

    ```
    eksctl create cluster --name gluu-cluster --nodegroup-name gluu-nodes --node-type NODE_TYPE --nodes 2 --managed --region REGION_CODE
    ```

    You can adjust the `node-type` and `nodes` number as per your desired cluster size.

- To be able to attach volumes to your pods, you need to install the Amazon EBS CSI driver (a sketch follows this list).

- Install Helm3.

- Create the `gluu` namespace where our resources will reside:

    ```
    kubectl create namespace gluu
    ```
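A minimal sketch of the SSA conversion, assuming the JWT you were issued is saved in a file named `ssa.jwt` (the filename is only an example):

```
# Encode the SSA JWT as a single-line base64 string
# (GNU coreutils; on macOS use: base64 -i ssa.jwt)
base64 -w0 ssa.jwt
```

Paste the resulting string into the `.global.licenseSsa` key of your `override.yaml`.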
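One way to install the EBS CSI driver is as an EKS managed add-on via eksctl. A sketch assuming the cluster name and region from the example above, and that you have already created an IAM role carrying the AWS-managed `AmazonEBSCSIDriverPolicy` (the account ID and role name below are placeholders):

```
# Associate an IAM OIDC provider with the cluster (required for IAM roles for service accounts)
eksctl utils associate-iam-oidc-provider --cluster gluu-cluster --region REGION_CODE --approve

# Install the EBS CSI driver as a managed add-on
eksctl create addon --name aws-ebs-csi-driver --cluster gluu-cluster --region REGION_CODE \
    --service-account-role-arn arn:aws:iam::ACCOUNT_ID:role/AmazonEKS_EBS_CSI_DriverRole
```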
## Gluu Flex Installation using Helm
- Install Nginx-Ingress, if you are not using Istio ingress:

    ```
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo add stable https://charts.helm.sh/stable
    helm repo update
    helm install nginx ingress-nginx/ingress-nginx
    ```
- Create a file named `override.yaml` and add changes as per your desired configuration:

    - FQDN/domain is not registered:

        Get the load balancer address:

        ```
        kubectl get svc nginx-ingress-nginx-controller --output jsonpath='{.status.loadBalancer.ingress[0].hostname}'
        ```

        Add the following yaml snippet to your `override.yaml` file:

        ```
        global:
          isFqdnRegistered: false
        config:
          configmap:
            lbAddr: http:// #Add LB address from previous command
        ```
    - FQDN/domain is registered:

        Add the following yaml snippet to your `override.yaml` file:

        ```
        global:
          isFqdnRegistered: true
          fqdn: demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
        config:
          configmap:
            lbAddr: http:// #Add LB address from previous command
        nginx:
          ingress:
            enabled: true
            path: /
            hosts:
              - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
            tls:
              - secretName: tls-certificate
                hosts:
                  - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
        ```
    - LDAP/OpenDJ for persistence storage:

        Add the following yaml snippet to your `override.yaml` file:

        ```
        global:
          cnPersistenceType: ldap
          storageClass:
            provisioner: kubernetes.io/aws-ebs
          opendj:
            enabled: true
        ```

        So if your desired configuration has no FQDN and LDAP, the final `override.yaml` file will look something like this:

        ```
        global:
          cnPersistenceType: ldap
          isFqdnRegistered: false
          storageClass:
            provisioner: kubernetes.io/aws-ebs
          opendj:
            enabled: true
        config:
          configmap:
            lbAddr: http:// #Add LB address from previous command
        nginx-ingress:
          ingress:
            path: /
            hosts:
              - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
            tls:
              - secretName: tls-certificate
                hosts:
                  - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
        ```
    - Couchbase for persistence storage:

        Add the following yaml snippet to your `override.yaml` file:

        ```
        global:
          cnPersistenceType: couchbase
        config:
          configmap:
            # The prefix of Couchbase buckets. This helps with separation between different environments and allows the same Couchbase cluster to be used by different setups of Janssen.
            cnCouchbaseBucketPrefix: jans
            # Couchbase certificate authority string, encoded using base64. This can also be found in your Couchbase UI under Security > Root Certificate. In mTLS setups this is not required.
            cnCouchbaseCrt: SWFtTm90YVNlcnZpY2VBY2NvdW50Q2hhbmdlTWV0b09uZQo=
            # The number of replicas per index created. Note that the number of index nodes must be one greater than the number of index replicas. That means if your Couchbase cluster only has 2 index nodes, you cannot set the number of replicas higher than 1.
            cnCouchbaseIndexNumReplica: 0
            # Couchbase password for the restricted user config.configmap.cnCouchbaseUser that is often used inside the services. The password must contain one digit, one uppercase letter, one lowercase letter, and one symbol.
            cnCouchbasePassword: P@ssw0rd
            # The Couchbase superuser (admin) username. This user is used during initialization only.
            cnCouchbaseSuperUser: admin
            # Couchbase password for the superuser config.configmap.cnCouchbaseSuperUser that is used during the initialization process. The password must contain one digit, one uppercase letter, one lowercase letter, and one symbol.
            cnCouchbaseSuperUserPassword: Test1234#
            # Couchbase URL. This should be in FQDN format for either remote or local Couchbase clusters. The address can be an internal address inside the Kubernetes cluster.
            cnCouchbaseUrl: cbjanssen.default.svc.cluster.local
            # Couchbase restricted user.
            cnCouchbaseUser: janssen
        ```
    - PostgreSQL for persistence storage:

        In a production environment, a production-grade PostgreSQL server should be used, such as Amazon RDS.

        For testing purposes, you can deploy it on the EKS cluster using the following command:

        ```
        helm install my-release --set auth.postgresPassword=Test1234#,auth.database=gluu -n gluu oci://registry-1.docker.io/bitnamicharts/postgresql
        ```

        Add the following yaml snippet to your `override.yaml` file (a connectivity sanity check is sketched after this list):

        ```
        global:
          cnPersistenceType: sql
        config:
          configmap:
            cnSqlDbName: gluu
            cnSqlDbPort: 5432
            cnSqlDbDialect: pgsql
            cnSqlDbHost: my-release-postgresql.gluu.svc
            cnSqlDbUser: postgres
            cnSqlDbTimezone: UTC
            cnSqldbUserPassword: Test1234#
        ```
    - MySQL for persistence storage:

        In a production environment, a production-grade MySQL server should be used, such as Amazon RDS.

        For testing purposes, you can deploy it on the EKS cluster using the following command:

        ```
        helm install my-release --set auth.rootPassword=Test1234#,auth.database=gluu -n gluu oci://registry-1.docker.io/bitnamicharts/mysql
        ```

        Add the following yaml snippet to your `override.yaml` file (see the connectivity sketch after this list):

        ```
        global:
          cnPersistenceType: sql
        config:
          configmap:
            cnSqlDbName: gluu
            cnSqlDbPort: 3306
            cnSqlDbDialect: mysql
            cnSqlDbHost: my-release-mysql.gluu.svc
            cnSqlDbUser: root
            cnSqlDbTimezone: UTC
            cnSqldbUserPassword: Test1234#
        ```

        So if your desired configuration has an FQDN and MySQL, the final `override.yaml` file will look something like this:

        ```
        global:
          cnPersistenceType: sql
          isFqdnRegistered: true
          fqdn: demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
        nginx-ingress:
          ingress:
            path: /
            hosts:
              - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
            tls:
              - secretName: tls-certificate
                hosts:
                  - demoexample.gluu.org #CHANGE-THIS to the FQDN used for Gluu
        config:
          configmap:
            lbAddr: http:// #Add LB address from previous command
            cnSqlDbName: gluu
            cnSqlDbPort: 3306
            cnSqlDbDialect: mysql
            cnSqlDbHost: my-release-mysql.gluu.svc
            cnSqlDbUser: root
            cnSqlDbTimezone: UTC
            cnSqldbUserPassword: Test1234#
        ```
- Install Gluu Flex:

    After finishing all the tweaks to the `override.yaml` file, we can use it to install Gluu Flex:

    ```
    helm repo add gluu-flex https://docs.gluu.org/charts
    helm repo update
    helm install gluu gluu-flex/gluu -n gluu -f override.yaml
    ```
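A minimal connectivity sanity check for the test database, assuming the Bitnami MySQL release from the option above (the `mysql:8` client image and the one-off pod name are examples only; an analogous `psql` check applies to the PostgreSQL option):

```
# Spin up a throwaway MySQL client pod and list databases; "gluu" should appear
kubectl run mysql-client --rm -it -n gluu --image=mysql:8 --restart=Never -- \
    mysql -h my-release-mysql.gluu.svc -u root -p'Test1234#' -e 'SHOW DATABASES;'
```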
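After the install command returns, you can watch the deployment come up. A sketch assuming a release named `gluu` in the `gluu` namespace, as used above:

```
# Jobs such as config and persistence should reach Completed;
# long-running services should become Running and Ready
kubectl get pods -n gluu -w

# Review the release status and its rendered notes
helm status gluu -n gluu
```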
## Configure Gluu Flex
You can use the Janssen TUI to configure Flex components. The TUI calls the Config API to perform ad hoc configuration.
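The TUI needs network access to the Config API. If the Config API is not exposed through your ingress, one possible way to reach it from a workstation is a `kubectl port-forward`; the service name `gluu-config-api` and port `8074` below are assumptions, so check `kubectl get svc -n gluu` for the actual values:

```
# Forward the Config API service to localhost:8074 (service name and port are assumed)
kubectl port-forward svc/gluu-config-api -n gluu 8074:8074
```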