Install with Helm Chart on Kubernetes
This page describes how to set up Aerospike Vector Search (AVS) using Google Kubernetes Engine and Helm.
Overview
Setting up AVS using Kubernetes and Helm takes place in the following stages:
- Create a GKE cluster
- Use Aerospike Kubernetes Operator (AKO) to deploy an Aerospike cluster
- Deploy the AVS cluster and necessary operators, configurations, and node pools
- Configure monitoring using Prometheus
- Deploy a specific Helm chart for AVS
Prerequisites
- git CLI
- gcloud CLI
- GKE Auth Plugin
- Helm 3+ CLI
- dig CLI
- A valid feature-key file (features.conf) with the vector-service feature enabled
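Before you begin, you can confirm the CLI tools are on your PATH with a quick check like the following. This is a hedged sketch: kubectl is added to the list because the steps below use it, and the gke-gcloud-auth-plugin install command assumes an archive-based gcloud installation (package-manager installs provide the plugin as a separate package).

for tool in git gcloud helm dig kubectl; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
gcloud components install gke-gcloud-auth-plugin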
1. Clone the AVS Examples repository
The AVS Examples GitHub repository includes a bash script to install all the necessary components.
First, clone the repository.
git clone https://github.com/aerospike/aerospike-vector-search-examples.git && \
cd aerospike-vector-search-examples/kubernetes/
2. Copy your feature-key file
Ensure your feature-key file is locally available in the repository.
In this example, features.conf is copied from the Downloads folder to aerospike-vector-search-examples/kubernetes/:
cp ~/Downloads/features.conf .
3. Run the script with the specified arguments
Run the bash script and specify a GKE cluster name.
In this example, the GKE cluster name is vector:
./full-create-and-install-gke.sh --cluster-name vector --run-insecure
4. Test your cluster setup
The script outputs your external IP address. Alternatively, you can retrieve it by running:
kubectl get services -n istio-ingress
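If you only need the IP itself, a jsonpath query works too. This is a hedged example: it assumes the ingress service created by the script is named istio-ingress in the istio-ingress namespace.

kubectl get svc istio-ingress -n istio-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'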
Next, you can do any of the following:
4a. Download and install asvec
The asvec CLI tool lets you connect to the cluster and view your nodes. Use it to confirm your connection as well as to manage indexes and users.
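For example, once asvec is installed you can list the AVS nodes over the external IP. This sketch assumes the default AVS port of 5000 and the insecure setup produced by the script above; check the asvec documentation for the exact flags in your version.

asvec node ls --host <EXTERNAL-IP>:5000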
4b. Try out a basic search
Use our basic search notebook to walk through using the Python client in your application.
4c. Install dashboards and connect monitoring
Download and install the dashboards:
git clone https://github.com/aerospike/aerospike-monitoring.git && \
./import-dashboards.sh ./aerospike-monitoring/config/grafana/dashboards
Now you can check the monitoring stack to see the health of your cluster.
Port forward to your Grafana instance:
kubectl port-forward deployments/monitoring-stack-grafana 3000:3000 -n monitoring
Open http://localhost:3000/login in a browser.
The default credentials are admin:prom-operator.
Navigate to the Aerospike Vector Search dashboard.
Install Manually
The full-create-and-install-gke.sh script contains the following commands.
You can run these commands one section at a time to install manually.
Set environment variables
The script relies on the following environment variables. Configure these to your preference.
export PROJECT_ID="$(gcloud config get-value project)"
export CLUSTER_NAME="aerospike-vector-search"
export NODE_POOL_NAME_AEROSPIKE="aerospike-pool"
export NODE_POOL_NAME_AVS="avs-pool"
export ZONE="us-central1-c"
export FEATURES_CONF="./features.conf"
Create the GKE cluster
Create a single-node Kubernetes cluster that you will expand later by adding node pools for Aerospike Database and AVS.
if ! gcloud container clusters create "$CLUSTER_NAME" \
--project "$PROJECT_ID" \
--zone "$ZONE" \
--num-nodes 1 \
--disk-type "pd-standard" \
--disk-size "100"; then
echo "Failed to create GKE cluster"
else
echo "GKE cluster created successfully."
fi
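Recent gcloud versions typically configure kubectl credentials automatically during cluster creation. If your kubectl context does not point at the new cluster, fetch the credentials explicitly:

gcloud container clusters get-credentials "$CLUSTER_NAME" \
  --project "$PROJECT_ID" \
  --zone "$ZONE"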
Install Aerospike Database (ASDB)
The next three steps describe how to install and configure the Aerospike Database.
Create and configure the ASDB node pool. This section creates a node pool for Aerospike, labels the nodes, and handles any errors in the process. A dedicated node pool ensures that Aerospike Database runs on its own nodes rather than sharing resources with other applications on the cluster.
if ! gcloud container node-pools create "$NODE_POOL_NAME_AEROSPIKE" \
--cluster "$CLUSTER_NAME" \
--project "$PROJECT_ID" \
--zone "$ZONE" \
--num-nodes 3 \
--local-ssd-count 2 \
--disk-type "pd-standard" \
--disk-size "100" \
--machine-type "n2d-standard-2"; then
echo "Failed to create Aerospike node pool"
else
echo "Aerospike node pool added successfully."
fi
kubectl get nodes -l cloud.google.com/gke-nodepool="$NODE_POOL_NAME_AEROSPIKE" -o name | \
xargs -I {} kubectl label {} aerospike.com/node-pool=default-rack --overwrite

Install Aerospike Kubernetes Operator (AKO). This set of commands downloads and installs the Operator Lifecycle Manager (OLM), deploys AKO with a configuration file, waits for it to be fully available, and then labels nodes in the node pool to designate them for Aerospike workloads.
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.25.0/install.sh | bash -s v0.25.0
kubectl create -f https://operatorhub.io/install/aerospike-kubernetes-operator.yaml
while true; do
  if kubectl --namespace operators get deployment/aerospike-operator-controller-manager &> /dev/null; then
    kubectl --namespace operators wait \
      --for=condition=available --timeout=180s deployment/aerospike-operator-controller-manager
    break
  else
    sleep 10
  fi
done
echo "AKO added successfully."
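To double-check the operator installation through OLM, you can inspect its ClusterServiceVersion, whose phase should report Succeeded:

kubectl get csv -n operators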
kubectl get nodes -l cloud.google.com/gke-nodepool="$NODE_POOL_NAME_AEROSPIKE" -o name | \
xargs -I {} kubectl label {} aerospike.com/node-pool=default-rack --overwrite

Install the Aerospike Database using AKO and configure the appropriate secrets and storage settings.
kubectl create namespace aerospike
kubectl --namespace aerospike create serviceaccount aerospike-operator-controller-manager
kubectl create clusterrolebinding aerospike-cluster \
--clusterrole=aerospike-cluster --serviceaccount=aerospike:aerospike-operator-controller-manager
kubectl --namespace aerospike create secret generic aerospike-secret --from-file=features.conf="$FEATURES_CONF"
kubectl --namespace aerospike create secret generic auth-secret --from-literal=password='admin123'
kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-kubernetes-operator/master/config/samples/storage/gce_ssd_storage_class.yaml
kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/ssd_storage_cluster_cr.yaml

Validate the installation by verifying that the pods are healthy in the aerospike namespace.

kubectl get pods -n aerospike
NAME READY STATUS RESTARTS AGE
aerocluster-0-0 2/2 Running 0 109s
aerocluster-0-1 2/2 Running 0 109s
aerocluster-0-2 2/2 Running 0 109s
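You can also inspect the AerospikeCluster custom resource that AKO manages to check the cluster's status. This assumes the resource name aerocluster from the sample manifest applied above:

kubectl -n aerospike get aerospikecluster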
Install AVS
Create and configure a new node pool for AVS.
if ! gcloud container node-pools create "$NODE_POOL_NAME_AVS" \
--cluster "$CLUSTER_NAME" \
--project "$PROJECT_ID" \
--zone "$ZONE" \
--num-nodes 3 \
--disk-type "pd-standard" \
--disk-size "100" \
--machine-type "e2-highmem-4"; then
echo "Failed to create avs node pool"
else
echo "avs node pool added successfully."
fi
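To confirm the new pool was added alongside the default and Aerospike pools, list the cluster's node pools:

gcloud container node-pools list --cluster "$CLUSTER_NAME" --zone "$ZONE" --project "$PROJECT_ID"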
kubectl get nodes -l cloud.google.com/gke-nodepool="$NODE_POOL_NAME_AVS" -o name | \
xargs -I {} kubectl label {} aerospike.com/node-pool=avs --overwrite

Configure the AVS namespace and secrets. The following commands set up the avs namespace and create secrets for the AVS cluster.

kubectl create namespace avs
kubectl --namespace avs create secret generic aerospike-secret --from-file=features.conf="$FEATURES_CONF"
kubectl --namespace avs create secret generic auth-secret --from-literal=password='admin123'

Deploy AVS using the Helm chart.
helm repo add aerospike-helm https://artifact.aerospike.io/artifactory/api/helm/aerospike-helm
helm repo update
helm install avs-gke --values https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/avs-gke-values.yaml --namespace avs aerospike-helm/aerospike-vector-search --version 0.4.0 --wait

Validate the installation:
kubectl get pods -n avs
NAME READY STATUS RESTARTS AGE
avs-gke-aerospike-vector-search-0 1/1 Running 0 60s
avs-gke-aerospike-vector-search-1 1/1 Running 0 60s
avs-gke-aerospike-vector-search-2 1/1 Running 0 60s
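If a pod does not reach Ready, its logs are the first place to look. Pod names come from the output above:

kubectl -n avs logs avs-gke-aerospike-vector-search-0 --tail=50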
Install networking and monitoring (optional)
This section describes how to set up the Prometheus monitoring stack and Istio networking stack. These steps are optional, and you may consider other monitoring and networking options.
Deploy and configure the Prometheus monitoring stack. These commands set up the monitoring stack using Prometheus and apply additional monitoring configurations:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring-stack prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/monitoring/aerospike-exporter-service.yaml
kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/monitoring/aerospike-servicemonitor.yaml
kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/monitoring/avs-servicemonitor.yaml

Deploy and configure Istio. These commands deploy Istio for application load balancing. A layer 7 load balancer is recommended.
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
helm install istio-base istio/base --namespace istio-system --set defaultRevision=default --create-namespace --wait
helm install istiod istio/istiod --namespace istio-system --create-namespace --wait
helm install istio-ingress istio/gateway \
--values https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/istio/istio-ingressgateway-values.yaml \
--namespace istio-ingress \
--create-namespace \
--wait
kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/istio/gateway.yaml
kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/istio/avs-virtual-service.yaml
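Once the gateway is deployed, the LoadBalancer service in the istio-ingress namespace exposes the external IP that AVS clients connect through (the same IP referenced in step 4 above):

kubectl get svc -n istio-ingress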
Supported configuration
The following table describes all supported configuration parameters in the AVS Helm chart:
| Parameter | Description | Default |
|-----------|-------------|---------|
| replicaCount | Configures the number of AVS instance pods to run. | '1' |
| image | Configures the AVS image repository, tag, and pull policy. | See values.yaml. |
| imagePullSecrets | For private Docker registries, when authentication is needed. | See values.yaml. |
| aerospikeVectorSearchConfig | AVS cluster configuration deployed to /etc/aerospike-vector-search/aerospike-vector-search.yml. | See values.yaml. |
| initContainers | List of initContainers added to each AVS pod for custom cluster behavior. | |
| serviceAccount | Service account details such as name and annotations. | See values.yaml. |
| podAnnotations | Additional pod annotations. Must be specified as a map of annotation names to annotation values. | {} |
| podLabels | Additional pod labels. Must be specified as a map of label names to label values. | {} |
| podSecurityContext | Pod security context. | {} |
| securityContext | Container security context. | {} |
| service | Load balancer configuration. For more details, see Load Balancer Service. | {} |
| resources | Resource requests and limits for the AVS pods. | {} |
| autoscaling | Enable the horizontal pod autoscaler. | See values.yaml. |
| extraVolumes | List of additional volumes to attach to the AVS pod. | See values.yaml. |
| extraVolumeMounts | Extra volume mounts corresponding to the volumes added to extraVolumes. | See values.yaml. |
| extraSecretVolumeMounts | Extra secret volume mounts corresponding to the volumes added to extraVolumes. | See values.yaml. |
| affinity | Affinity rules, if any, for the pods. | {} |
| nodeSelector | Node selector for the pods. | {} |
| tolerations | Tolerations for the pods. | {} |
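As with any Helm chart, these parameters can be overridden on the command line or in a custom values file. A hedged example, reusing the release installed above with an illustrative replica count:

helm upgrade avs-gke aerospike-helm/aerospike-vector-search \
  --namespace avs --version 0.4.0 \
  --reuse-values \
  --set replicaCount=3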