Install the Aerospike Kubernetes Operator Using Helm
Overview
This page describes how to use Helm charts to install the Aerospike Kubernetes Operator (AKO).
Helm charts are collections of YAML files that describe Kubernetes resources and their configurations. If you plan to use Helm charts to deploy Aerospike clusters, you also need to use Helm to install the AKO on your Kubernetes deployment.
Requirements
You need an existing Kubernetes cluster with kubectl configured to use that cluster. See the Requirements page for supported Kubernetes versions and other requirements.
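Before you begin, you can optionally confirm that kubectl is pointing at the intended cluster. These standard commands print the active context and the client and server versions:
kubectl config current-context
kubectl version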
Before installing the AKO, you must install cert-manager. AKO uses admission webhooks, which need TLS certificates issued by cert-manager. Follow the official cert-manager instructions to install cert-manager on your Kubernetes cluster.
In Kubernetes version 1.23 or later, Pod Security Admission (PSA) is enabled by default. Make sure the Pod Security Standard level of the namespace where the AKO is installed is set to either baseline or privileged. The restricted level is not supported by Aerospike. The default Pod Security Standard level in Kubernetes 1.23 is privileged. For more details, see Apply Pod Security Standards.
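As an illustration, the following command sets the privileged Pod Security Standard level on a namespace named aerospike (the namespace name here is an example; substitute your own):
kubectl label namespace aerospike pod-security.kubernetes.io/enforce=privileged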
1. Get the Helm charts
To get the Helm chart for the AKO, add the Helm repository:
helm repo add aerospike https://aerospike.github.io/aerospike-kubernetes-enterprise
If the Helm repository is already added, update the index:
helm repo update
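You can optionally verify that the chart is now visible in the local repository index:
helm search repo aerospike/aerospike-kubernetes-operator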
2. Deploy the AKO
Run the following command to deploy the AKO:
helm install aerospike-kubernetes-operator aerospike/aerospike-kubernetes-operator --version=3.3.1 --set watchNamespaces="aerospike"
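Equivalently, the --set flags can be collected in a values file. This is a minimal sketch using the chart keys listed in the Configurations section below; the custom-values.yaml filename is illustrative:
# custom-values.yaml: same settings as the --set flag above
watchNamespaces: "aerospike"
Then pass the file to Helm:
helm install aerospike-kubernetes-operator aerospike/aerospike-kubernetes-operator --version=3.3.1 -f custom-values.yaml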
3. Check AKO logs
AKO runs as two replicas by default for higher availability. Run the following command to follow the logs of the AKO pods:
kubectl -n <release-namespace> logs -f deployment/aerospike-kubernetes-operator manager
Output:
2023-08-01T09:07:02Z INFO setup legacy OLM < 0.17 directory is present - initializing webhook server
2023-08-01T09:07:03Z INFO controller-runtime.metrics Metrics server is starting to listen {"addr": "127.0.0.1:8080"}
2023-08-01T09:07:03Z INFO setup Init aerospike-server config schemas
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.3.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.5.2"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.6.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.5.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.2.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.4.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.0.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.7.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.0.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.7.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.1.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.1.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.5.1"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.6.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.0.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.1.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.2.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.4.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.3.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.2.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.5.3"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.5.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.3.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.3.1"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.4.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.8.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.9.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "7.0.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "7.1.0"}
2023-08-01T09:07:03Z INFO aerospikecluster-resource Registering mutating webhook to the webhook server
2023-08-01T09:07:03Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-asdb-aerospike-com-v1-aerospikecluster"}
2023-08-01T09:07:03Z INFO controller-runtime.builder skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called {"GVK": "asdb.aerospike.com/v1, Kind=AerospikeCluster"}
2023-08-01T09:07:03Z INFO controller-runtime.builder Registering a validating webhook {"GVK": "asdb.aerospike.com/v1, Kind=AerospikeCluster", "path": "/validate-asdb-aerospike-com-v1-aerospikecluster"}
2023-08-01T09:07:03Z INFO controller-runtime.webhook Registering webhook {"path": "/validate-asdb-aerospike-com-v1-aerospikecluster"}
2023-08-01T09:07:03Z INFO setup Starting manager
2023-08-01T09:07:03Z INFO controller-runtime.webhook.webhooks Starting webhook server
2023-08-01T09:07:03Z INFO Starting server {"path": "/metrics", "kind": "metrics", "addr": "127.0.0.1:8080"}
2023-08-01T09:07:03Z INFO controller-runtime.certwatcher Updated current TLS certificate
2023-08-01T09:07:03Z INFO Starting server {"kind": "health probe", "addr": "[::]:8081"}
I0801 09:07:03.213295 1 leaderelection.go:248] attempting to acquire leader lease operators/96242fdf.aerospike.com...
2023-08-01T09:07:03Z INFO controller-runtime.webhook Serving webhook server {"host": "", "port": 9443}
2023-08-01T09:07:03Z INFO controller-runtime.certwatcher Starting certificate watcher
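To verify that the operator pods themselves are running, you can also list them, replacing <release-namespace> with the namespace used during installation:
kubectl -n <release-namespace> get pods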
4. Grant permissions to the target namespaces
The AKO is installed in the <release-namespace> namespace. Grant additional permissions (by configuring ServiceAccounts and RoleBindings or ClusterRoleBindings) for the target Kubernetes namespaces where Aerospike clusters are created.
There are two ways to grant permissions to the target namespaces:
Using kubectl
The procedure to use the namespace aerospike is as follows:
Create the namespace
Create the Kubernetes namespace if not already created:
kubectl create namespace aerospike
Create a service account
kubectl -n aerospike create serviceaccount aerospike-operator-controller-manager
Create RoleBinding/ClusterRoleBinding for the Aerospike cluster
Next, create a RoleBinding or ClusterRoleBinding, as required, to attach this service account to the aerospike-cluster ClusterRole. This ClusterRole is created as part of the AKO installation and grants Aerospike cluster permissions to the service account.
- To connect to the Aerospike cluster using the Kubernetes-native Pod-only network, create a RoleBinding:
kubectl -n aerospike create rolebinding aerospike-cluster --clusterrole=aerospike-cluster --serviceaccount=aerospike:aerospike-operator-controller-manager
- To connect to the Aerospike cluster from outside Kubernetes, create a ClusterRoleBinding:
kubectl create clusterrolebinding aerospike-cluster --clusterrole=aerospike-cluster --serviceaccount=aerospike:aerospike-operator-controller-manager
To attach multiple service accounts from different namespaces in one go, add multiple --serviceaccount parameters to the above command.
Example: to attach the service accounts of the aerospike and aerospike1 namespaces:
kubectl create clusterrolebinding aerospike-cluster --clusterrole=aerospike-cluster --serviceaccount=aerospike:aerospike-operator-controller-manager --serviceaccount=aerospike1:aerospike-operator-controller-manager
If the required ClusterRoleBinding already exists in the cluster, edit it to attach the new service account:
kubectl edit clusterrolebinding aerospike-cluster
This command launches an editor. Append the following lines to the subjects section:
# A new entry for aerospike.
# Replace aerospike with your namespace
- kind: ServiceAccount
  name: aerospike-operator-controller-manager
  namespace: aerospike
Save and ensure that the changes are applied.
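To confirm that the service account is attached, you can inspect the binding's subjects (an optional sanity check):
kubectl describe clusterrolebinding aerospike-cluster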
Using akoctl plugin
For instructions on installing the akoctl plugin, refer to akoctl installation.
The procedure to use the namespace aerospike is as follows:
- To connect to the Aerospike cluster using the Kubernetes-native Pod-only network, grant namespace-scoped permissions:
kubectl akoctl auth create -n aerospike --cluster-scope=false
- To connect to the Aerospike cluster from outside Kubernetes, grant cluster-scoped permissions:
kubectl akoctl auth create -n aerospike
To grant permissions to multiple namespaces in one go, pass a comma-separated list of namespaces to the -n parameter.
Example: to grant permissions for the aerospike and aerospike1 namespaces:
kubectl akoctl auth create -n aerospike,aerospike1
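To revoke permissions granted this way, the akoctl plugin provides a matching delete subcommand; this sketch assumes the same namespace as above:
kubectl akoctl auth delete -n aerospike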
Configurations
Name | Description | Default |
---|---|---|
replicas | Number of AKO replicas | 2 |
operatorImage.repository | AKO image repository | aerospike/aerospike-kubernetes-operator |
operatorImage.tag | AKO image tag | 3.3.1 |
operatorImage.pullPolicy | Image pull policy | IfNotPresent |
imagePullSecrets | Secrets containing credentials to pull AKO image from a private registry | {} (nil) |
rbac.create | Set this to true to let helm chart automatically create RBAC resources necessary for the AKO | true |
rbac.serviceAccountName | If rbac.create=false, provide a service account name to be used with the AKO deployment | default |
healthPort | Health port | 8081 |
metricsPort | Metrics port | 8080 |
certs.create | Set this to true to let Helm chart automatically create certificates using cert-manager | true |
certs.webhookServerCertSecretName | Kubernetes secret name that contains webhook server certificates | webhook-server-cert |
watchNamespaces | Namespaces to watch. The AKO watches for AerospikeCluster custom resources in these namespaces. | default |
aerospikeKubernetesInitRegistry | Registry used to pull aerospike-init image | docker.io |
resources | Resource requests and limits for the AKO pods | requests.cpu: 10m , requests.memory: 64Mi , limits.cpu: 200m , limits.memory: 256Mi |
affinity | Affinity rules for the AKO deployment | {} (nil) |
extraEnv | Extra environment variables that are passed into the AKO pods | {} (nil) |
nodeSelector | Node selectors for scheduling the AKO pods based on node labels | {} (nil) |
tolerations | Tolerations for scheduling the AKO pods based on node taints | {} (nil) |
annotations | Annotations for the AKO deployment | {} (nil) |
labels | Labels for the AKO deployment | {} (nil) |
podAnnotations | Annotations for the AKO pods | {} (nil) |
podLabels | Labels for the AKO pods | {} (nil) |
metricsService.labels | Labels for the AKO's metrics service | {} (nil) |
metricsService.annotations | Annotations for the AKO's metrics service | {} (nil) |
metricsService.port | The AKO's metrics service port | 8443 |
metricsService.type | The AKO's metrics service type | ClusterIP |
webhookService.labels | Labels for the AKO's webhook service | {} (nil) |
webhookService.annotations | Annotations for the AKO's webhook service | {} (nil) |
webhookService.port | The AKO's webhook service port | 443 |
webhookService.targetPort | The AKO's webhook target port | 9443 |
webhookService.type | The AKO's webhook service type | ClusterIP |
podSecurityContext | Security context for the AKO pods | {} (nil) |
securityContext.allowPrivilegeEscalation | Set allowPrivilegeEscalation in the security context for the AKO container | false |
livenessProbe | Liveness probe for the AKO container | initialDelaySeconds: 15 , periodSeconds: 20 , timeoutSeconds: 1 , successThreshold: 1 , failureThreshold: 3 |
readinessProbe | Readiness probe for the AKO container | initialDelaySeconds: 5 , periodSeconds: 10 , timeoutSeconds: 1 , successThreshold: 1 , failureThreshold: 3 |
kubeRBACProxy.image.repository | Kube RBAC Proxy image repository | gcr.io/kubebuilder/kube-rbac-proxy |
kubeRBACProxy.image.tag | Kube RBAC Proxy image tag | v0.15.0 |
kubeRBACProxy.image.pullPolicy | Kube RBAC Proxy image pull policy | IfNotPresent |
kubeRBACProxy.port | Kube RBAC proxy listening port | 8443 |
kubeRBACProxy.resources | Resource requests and limits for the Kube RBAC Proxy container | requests.cpu: 5m , requests.memory: 64Mi , limits.cpu: 500m , limits.memory: 128Mi |
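For example, to change one of these values on an existing release while keeping the rest, you can use helm upgrade with --reuse-values (the replica count shown is illustrative):
helm upgrade aerospike-kubernetes-operator aerospike/aerospike-kubernetes-operator --version=3.3.1 --reuse-values --set replicas=3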