Version: Operator 3.2.2

Install the Aerospike Kubernetes Operator from OperatorHub

note

For Kubernetes version 1.23 or later, Pod Security Admission (PSA) is enabled by default. Make sure the namespace where the Aerospike Operator is installed has its Pod Security Standard level set to either baseline or privileged. The restricted level is not supported by Aerospike. The default Pod Security Standard level in Kubernetes 1.23 is privileged. For more details, see Apply Pod Security Standards.
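
If the namespace needs its level set explicitly, the standard PSA labels can be applied directly. The following is a minimal sketch, assuming the Operator is installed in the operators namespace as in the rest of this procedure:

kubectl label namespace operators pod-security.kubernetes.io/enforce=privileged --overwrite

Use baseline instead of privileged if that level is sufficient for your environment.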

Install the OLM and Operator

Install the Operator Lifecycle Manager (OLM) on your Kubernetes cluster with the command:

curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.25.0/install.sh | bash -s v0.25.0

Next, install the Aerospike Kubernetes Operator:

kubectl create -f https://operatorhub.io/install/aerospike-kubernetes-operator.yaml
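
If you want to review what the manifest creates before applying it (typically an OLM Subscription for the Operator), you can download it first. This is an optional variation, not part of the procedure above; the local file name is arbitrary:

curl -sL https://operatorhub.io/install/aerospike-kubernetes-operator.yaml -o aerospike-operator-install.yaml
kubectl create -f aerospike-operator-install.yaml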

Verify the Operator is Running

Verify that the Operator's ClusterServiceVersion (CSV) is in the Succeeded phase.

kubectl get csv -n operators aerospike-kubernetes-operator.v3.2.2 -w

You will see output similar to the following:

NAME                                   DISPLAY                         VERSION   REPLACES   PHASE
aerospike-kubernetes-operator.v3.2.2   Aerospike Kubernetes Operator   3.2.2                Succeeded
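
To check only the phase without watching, you can also query the CSV's status field directly:

kubectl get csv -n operators aerospike-kubernetes-operator.v3.2.2 -o jsonpath='{.status.phase}'

This prints Succeeded once the installation has completed.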

Check Operator Logs

The Operator runs two replicas by default for higher availability. Run the following command to follow the logs of the Operator pods.

kubectl -n operators logs -f deployment/aerospike-operator-controller-manager manager

Output:

2023-08-01T09:07:02Z    INFO    setup   legacy OLM < 0.17 directory is present - initializing webhook server
2023-08-01T09:07:03Z INFO controller-runtime.metrics Metrics server is starting to listen {"addr": "127.0.0.1:8080"}
2023-08-01T09:07:03Z INFO setup Init aerospike-server config schemas
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.3.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.5.2"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.6.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.5.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.2.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.4.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.0.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.7.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.0.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.7.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.1.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.1.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.5.1"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.6.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.0.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.1.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.2.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.4.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "6.3.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.2.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.5.3"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.5.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "5.3.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.3.1"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.4.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.8.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "4.9.0"}
2023-08-01T09:07:03Z DEBUG schema-map Config schema added {"version": "7.0.0"}
2023-08-01T09:07:03Z INFO aerospikecluster-resource Registering mutating webhook to the webhook server
2023-08-01T09:07:03Z INFO controller-runtime.webhook Registering webhook {"path": "/mutate-asdb-aerospike-com-v1-aerospikecluster"}
2023-08-01T09:07:03Z INFO controller-runtime.builder skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called {"GVK": "asdb.aerospike.com/v1, Kind=AerospikeCluster"}
2023-08-01T09:07:03Z INFO controller-runtime.builder Registering a validating webhook {"GVK": "asdb.aerospike.com/v1, Kind=AerospikeCluster", "path": "/validate-asdb-aerospike-com-v1-aerospikecluster"}
2023-08-01T09:07:03Z INFO controller-runtime.webhook Registering webhook {"path": "/validate-asdb-aerospike-com-v1-aerospikecluster"}
2023-08-01T09:07:03Z INFO setup Starting manager
2023-08-01T09:07:03Z INFO controller-runtime.webhook.webhooks Starting webhook server
2023-08-01T09:07:03Z INFO Starting server {"path": "/metrics", "kind": "metrics", "addr": "127.0.0.1:8080"}
2023-08-01T09:07:03Z INFO controller-runtime.certwatcher Updated current TLS certificate
2023-08-01T09:07:03Z INFO Starting server {"kind": "health probe", "addr": "[::]:8081"}
I0801 09:07:03.213295 1 leaderelection.go:248] attempting to acquire leader lease operators/96242fdf.aerospike.com...
2023-08-01T09:07:03Z INFO controller-runtime.webhook Serving webhook server {"host": "", "port": 9443}
2023-08-01T09:07:03Z INFO controller-runtime.certwatcher Starting certificate watcher
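
Because the Deployment runs two replicas, you may want to follow a single pod instead. A minimal sketch, where <pod-name> is one of the Operator pods returned by the first command:

kubectl -n operators get pods
kubectl -n operators logs -f <pod-name> -c manager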

Grant permissions to the target namespaces

The Operator is installed in the operators namespace. Grant additional permissions (by configuring ServiceAccounts and RoleBindings/ClusterRoleBindings) for the target Kubernetes namespaces where Aerospike clusters will be created.

There are two ways to grant permissions for the target namespaces:

  1. Using kubectl
  2. Using the akoctl plugin

Using kubectl

The following procedure uses the namespace aerospike as an example:

Create the namespace

Create the Kubernetes namespace if not already created:

kubectl create namespace aerospike

Create a service account

kubectl -n aerospike create serviceaccount aerospike-operator-controller-manager
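
If you prefer a declarative workflow, the same service account can be created from a manifest and applied with kubectl apply -f. A minimal equivalent sketch:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: aerospike-operator-controller-manager
  namespace: aerospike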

Create a RoleBinding/ClusterRoleBinding for the Aerospike cluster

Next, create a RoleBinding or ClusterRoleBinding, as required, to attach this service account to the aerospike-cluster ClusterRole. This ClusterRole is created as part of the Operator installation and grants the service account the permissions needed to manage Aerospike clusters.

  • To use the Kubernetes-native Pod-only network to connect to the Aerospike cluster, create a RoleBinding:
kubectl -n aerospike create rolebinding aerospike-cluster --clusterrole=aerospike-cluster --serviceaccount=aerospike:aerospike-operator-controller-manager
  • To connect to the Aerospike cluster from outside Kubernetes, create a ClusterRoleBinding:
kubectl create clusterrolebinding aerospike-cluster --clusterrole=aerospike-cluster --serviceaccount=aerospike:aerospike-operator-controller-manager
tip

To attach multiple service accounts from different namespaces in one command, add multiple --serviceaccount parameters to the command above.

Example: to attach the service accounts of the aerospike and aerospike1 namespaces:
kubectl create clusterrolebinding aerospike-cluster --clusterrole=aerospike-cluster --serviceaccount=aerospike:aerospike-operator-controller-manager --serviceaccount=aerospike1:aerospike-operator-controller-manager

If the required ClusterRoleBinding already exists in the cluster, edit it to attach the new service account:

kubectl edit clusterrolebinding aerospike-cluster

This command launches an editor. Append the following lines to the subjects section:

  # A new entry for aerospike.
  # Replace aerospike with your namespace
  - kind: ServiceAccount
    name: aerospike-operator-controller-manager
    namespace: aerospike

Save and ensure that the changes are applied.
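
Alternatively, the RoleBinding created earlier can be expressed declaratively. The following is a sketch of an equivalent manifest for the Pod-only network case; adjust the namespace if yours differs:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: aerospike-cluster
  namespace: aerospike
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: aerospike-cluster
subjects:
- kind: ServiceAccount
  name: aerospike-operator-controller-manager
  namespace: aerospike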

Using the akoctl plugin

For instructions on installing the akoctl plugin, see akoctl installation.

The following procedure uses the namespace aerospike as an example:

  • To use the Kubernetes-native Pod-only network to connect to the Aerospike cluster, grant namespace-scoped permissions:
kubectl akoctl auth create -n aerospike --cluster-scope=false
  • To connect to the Aerospike cluster from outside Kubernetes, grant cluster-scoped permissions:
kubectl akoctl auth create -n aerospike
tip

To grant permissions for multiple namespaces in one command, specify a comma-separated namespace list with the -n parameter.

Example: to grant permissions for the aerospike and aerospike1 namespaces:
kubectl akoctl auth create -n aerospike,aerospike1
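
To confirm the result, you can inspect the resources in the target namespace afterwards. This assumes akoctl creates the same service account and bindings that the kubectl procedure above creates manually:

kubectl -n aerospike get serviceaccounts
kubectl -n aerospike get rolebindings
kubectl get clusterrolebindings | grep aerospike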