Version: Operator 3.4.0

Install the Aerospike Kubernetes Operator Using Helm

Overview

This page describes how to use Helm charts to install the Aerospike Kubernetes Operator (AKO).

Helm charts are groups of YAML files that describe Kubernetes resources and their current configurations. If you plan to use Helm charts to deploy Aerospike clusters, you also need to use Helm to install the AKO on your Kubernetes deployment.

Requirements

You first need an existing Kubernetes cluster with kubectl configured to use that cluster. See the Requirements page for Kubernetes version and other requirements.

Before installing AKO, you must install cert-manager. AKO uses admission webhooks, which require TLS certificates issued by cert-manager. Follow the official cert-manager instructions to install it on your Kubernetes cluster.

note

In Kubernetes version 1.23 or later, Pod Security Admission (PSA) is enabled by default. Make sure the namespace where AKO is installed has its Pod Security Standard level set to either baseline or privileged. The restricted level is not supported by Aerospike. The default Pod Security Standard level in Kubernetes 1.23 is privileged. For more details, see Apply Pod Security Standards.
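The Pod Security Standard level for a namespace is set with a standard PSA label. A minimal sketch of a namespace manifest follows; the namespace name aerospike is an example, and baseline is one of the two supported levels:

```yaml
# Namespace with the baseline Pod Security Standard enforced.
# Replace "aerospike" with the namespace where AKO is installed.
apiVersion: v1
kind: Namespace
metadata:
  name: aerospike
  labels:
    pod-security.kubernetes.io/enforce: baseline
```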

1. Get the Helm charts

To get the AKO Helm chart, add the Helm repository:

helm repo add aerospike https://aerospike.github.io/aerospike-kubernetes-enterprise

If the Helm repository is already added, update the index:

helm repo update

2. Deploy AKO

Run the following command to deploy AKO:

helm install aerospike-kubernetes-operator aerospike/aerospike-kubernetes-operator --version=3.4.0 --set watchNamespaces="aerospike"
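The --set flag can also be expressed in a values file, which is easier to keep under version control. A minimal sketch (the file name ako-values.yaml is an assumption, not part of the chart):

```yaml
# ako-values.yaml -- equivalent to --set watchNamespaces="aerospike"
watchNamespaces: "aerospike"
```

Pass the file with -f, for example: helm install aerospike-kubernetes-operator aerospike/aerospike-kubernetes-operator --version=3.4.0 -f ako-values.yaml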

3. Check AKO logs

AKO runs as two replicas by default for high availability. Run the following command to follow the logs of the AKO pods:

kubectl -n <release-namespace> logs -f deployment/aerospike-kubernetes-operator manager

Output:

2023-08-01T09:07:03Z    INFO    setup   Init aerospike-server config schemas
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.3.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.5.2"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.6.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.5.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.2.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.4.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.0.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.7.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.0.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.7.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.1.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.1.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.5.1"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.6.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.0.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.1.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.2.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.4.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.3.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.2.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.5.3"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.5.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.3.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.3.1"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.4.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.8.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.9.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "7.0.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "7.1.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "7.2.0"}
2023-08-01T09:07:03Z INFO aerospikecluster-resource Registering mutating webhook to the webhook server
2023-08-01T09:07:03Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-asdb-aerospike-com-v1-aerospikecluster"}
2023-08-01T09:07:03Z INFO controller-runtime.builder skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called {"GVK": "asdb.aerospike.com/v1, Kind=AerospikeCluster"}
2023-08-01T09:07:03Z INFO controller-runtime.builder Registering a validating webhook {"GVK": "asdb.aerospike.com/v1, Kind=AerospikeCluster", "path": "/validate-asdb-aerospike-com-v1-aerospikecluster"}
2023-08-01T09:07:03Z INFO controller-runtime.webhook Registering webhook {"path": "/validate-asdb-aerospike-com-v1-aerospikecluster"}
2023-08-01T09:07:03Z INFO setup Starting manager
2023-08-01T09:07:03Z INFO controller-runtime.webhook.webhooks Starting webhook server
2023-08-01T09:07:03Z INFO Starting server {"path": "/metrics", "kind": "metrics", "addr": "127.0.0.1:8080"}
2023-08-01T09:07:03Z INFO controller-runtime.certwatcher Updated current TLS certificate
2023-08-01T09:07:03Z INFO Starting server {"kind": "health probe", "addr": "[::]:8081"}
I0801 09:07:03.213295 1 leaderelection.go:248] attempting to acquire leader lease operators/96242fdf.aerospike.com...
2023-08-01T09:07:03Z INFO controller-runtime.webhook Serving webhook server {"host": "", "port": 9443}
2023-08-01T09:07:03Z INFO controller-runtime.certwatcher Starting certificate watcher

4. Grant permissions to the target namespaces

AKO is installed in the <release-namespace> namespace. Grant additional permissions (by configuring ServiceAccounts and RoleBindings or ClusterRoleBindings) for each target Kubernetes namespace where Aerospike clusters will be created.

There are two ways to grant permissions for the target namespaces:

  1. Using kubectl
  2. Using akoctl plugin

Using kubectl

The following procedure uses the namespace aerospike as an example:

Create the namespace

Create the Kubernetes namespace if it does not already exist:

kubectl create namespace aerospike

Create a service account

kubectl -n aerospike create serviceaccount aerospike-operator-controller-manager

Create a RoleBinding/ClusterRoleBinding for the Aerospike cluster

Next, create a RoleBinding or ClusterRoleBinding, as required, to attach this service account to the aerospike-cluster ClusterRole. This ClusterRole is created as part of the AKO installation and grants Aerospike cluster permissions to the service account.

  • To use Kubernetes-native Pod-only networking to connect to the Aerospike cluster, create a RoleBinding:
kubectl -n aerospike create rolebinding aerospike-cluster --clusterrole=aerospike-cluster --serviceaccount=aerospike:aerospike-operator-controller-manager
  • To connect to the Aerospike cluster from outside Kubernetes, create a ClusterRoleBinding:
kubectl create clusterrolebinding aerospike-cluster --clusterrole=aerospike-cluster --serviceaccount=aerospike:aerospike-operator-controller-manager
tip

To attach multiple service accounts from different namespaces in one go, add multiple --serviceaccount parameters to the command above.

Example: to attach the service accounts of the aerospike and aerospike1 namespaces:
kubectl create clusterrolebinding aerospike-cluster --clusterrole=aerospike-cluster --serviceaccount=aerospike:aerospike-operator-controller-manager --serviceaccount=aerospike1:aerospike-operator-controller-manager

If the required ClusterRoleBinding already exists in the cluster, edit it to attach the new service account:

kubectl edit clusterrolebinding aerospike-cluster

This command launches an editor. Append the following lines to the subjects section:

# A new entry for the aerospike namespace.
# Replace aerospike with your namespace.
- kind: ServiceAccount
  name: aerospike-operator-controller-manager
  namespace: aerospike

Save and ensure that the changes are applied.
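If you prefer to manage the binding declaratively rather than with kubectl edit, the ClusterRoleBinding created by the commands above can be sketched as a manifest (names match the commands in this section; apply it with kubectl apply -f):

```yaml
# ClusterRoleBinding attaching the aerospike namespace's service account
# to the aerospike-cluster ClusterRole created by the AKO installation.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aerospike-cluster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: aerospike-cluster
subjects:
- kind: ServiceAccount
  name: aerospike-operator-controller-manager
  namespace: aerospike
```

To attach service accounts from additional namespaces, append further entries to the subjects list.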

Using the akoctl plugin

For instructions on installing the akoctl plugin, see akoctl installation.

The following procedure uses the namespace aerospike as an example:

  • To use Kubernetes-native Pod-only networking to connect to the Aerospike cluster, grant namespace-scoped permissions:
kubectl akoctl auth create -n aerospike --cluster-scope=false
  • To connect to the Aerospike cluster from outside Kubernetes, grant cluster-scoped permissions:
kubectl akoctl auth create -n aerospike
tip

To grant permissions for multiple namespaces in one go, specify a comma-separated namespace list in the -n parameter.

Example: to grant permissions for the aerospike and aerospike1 namespaces:
kubectl akoctl auth create -n aerospike,aerospike1

Configurations

| Name | Description | Default |
|------|-------------|---------|
| replicas | Number of AKO replicas | 2 |
| operatorImage.repository | AKO image repository | aerospike/aerospike-kubernetes-operator |
| operatorImage.tag | AKO image tag | 3.4.0 |
| operatorImage.pullPolicy | Image pull policy | IfNotPresent |
| imagePullSecrets | Secrets containing credentials to pull the AKO image from a private registry | {} (nil) |
| rbac.create | Set to true to let the Helm chart automatically create the RBAC resources necessary for AKO | true |
| rbac.serviceAccountName | If rbac.create=false, provide a service account name to use with the AKO deployment | default |
| healthPort | Health port | 8081 |
| metricsPort | Metrics port | 8080 |
| certs.create | Set to true to let the Helm chart automatically create certificates using cert-manager | true |
| certs.webhookServerCertSecretName | Kubernetes secret name that contains the webhook server certificates | webhook-server-cert |
| watchNamespaces | Namespaces to watch. AKO watches for AerospikeCluster custom resources in these namespaces. | default |
| aerospikeKubernetesInitRegistry | Registry used to pull the aerospike-init image | docker.io |
| resources | Resource requests and limits for the AKO pods | requests.cpu: 10m, requests.memory: 64Mi, limits.cpu: 200m, limits.memory: 256Mi |
| affinity | Affinity rules for the AKO deployment | {} (nil) |
| extraEnv | Extra environment variables passed into the AKO pods | {} (nil) |
| nodeSelector | Node selectors for scheduling the AKO pods based on node labels | {} (nil) |
| tolerations | Tolerations for scheduling the AKO pods based on node taints | {} (nil) |
| annotations | Annotations for the AKO deployment | {} (nil) |
| labels | Labels for the AKO deployment | {} (nil) |
| podAnnotations | Annotations for the AKO pods | {} (nil) |
| podLabels | Labels for the AKO pods | {} (nil) |
| metricsService.labels | Labels for the AKO metrics service | {} (nil) |
| metricsService.annotations | Annotations for the AKO metrics service | {} (nil) |
| metricsService.port | The AKO metrics service port | 8443 |
| metricsService.type | The AKO metrics service type | ClusterIP |
| webhookService.labels | Labels for the AKO webhook service | {} (nil) |
| webhookService.annotations | Annotations for the AKO webhook service | {} (nil) |
| webhookService.port | The AKO webhook service port | 443 |
| webhookService.targetPort | The AKO webhook target port | 9443 |
| webhookService.type | The AKO webhook service type | ClusterIP |
| podSecurityContext | Security context for the AKO pods | {} (nil) |
| securityContext.allowPrivilegeEscalation | Set allowPrivilegeEscalation in the security context for the AKO container | false |
| livenessProbe | Liveness probe for the AKO container | initialDelaySeconds: 15, periodSeconds: 20, timeoutSeconds: 1, successThreshold: 1, failureThreshold: 3 |
| readinessProbe | Readiness probe for the AKO container | initialDelaySeconds: 5, periodSeconds: 10, timeoutSeconds: 1, successThreshold: 1, failureThreshold: 3 |
| kubeRBACProxy.image.repository | Kube RBAC Proxy container image repository | gcr.io/kubebuilder/kube-rbac-proxy |
| kubeRBACProxy.image.tag | Kube RBAC Proxy image tag | v0.16.0 |
| kubeRBACProxy.image.pullPolicy | Kube RBAC Proxy image pull policy | IfNotPresent |
| kubeRBACProxy.port | Kube RBAC Proxy listening port | 8443 |
| kubeRBACProxy.resources | Kube RBAC Proxy container resources | requests.cpu: 5m, requests.memory: 64Mi, limits.cpu: 500m, limits.memory: 128Mi |
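Several of the parameters above can be combined in a single values file. The sketch below assumes the nested YAML structure mirrors the flat parameter names in the table, which is the usual Helm convention; all values shown are illustrative, not recommendations:

```yaml
# Illustrative values file combining parameters from the table above.
replicas: 2
watchNamespaces: "aerospike,aerospike1"
resources:
  requests:
    cpu: 10m
    memory: 64Mi
  limits:
    cpu: 200m
    memory: 256Mi
```

Apply it at install time with -f, or to a running release with helm upgrade.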