Install with Helm Chart on Kubernetes

Overview

This page describes how to set up Aerospike Vector Search (AVS) on Kubernetes using Helm. The setup includes the following steps:

  • Create a GKE cluster
  • Use Aerospike Kubernetes Operator (AKO) to deploy an Aerospike cluster
  • Deploy the AVS cluster with the necessary operators, configurations, and node pools
  • Configure monitoring using Prometheus
  • Deploy a specific Helm chart for AVS

Prerequisites

  • gcloud CLI

  • Helm 3+ CLI

  • A valid feature-key file features.conf with the vector-service feature enabled.

A full version of these commands is available as a script in the Aerospike Vector Search Examples repository. You can clone the repo and run everything with the full script, or copy and paste each command from this page.
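
Before you begin, you can confirm that your feature-key file actually enables the feature. This is a minimal sketch; it assumes the feature appears by name in features.conf:

# Check the key file for the vector-service feature by name.
grep -q "vector-service" ./features.conf \
&& echo "vector-service feature found" \
|| echo "vector-service feature missing from features.conf"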

Set environment variables

The script relies on the following environment variables. Adjust them to match your environment.

export PROJECT_ID="$(gcloud config get-value project)"
export CLUSTER_NAME="aerospike-vector-search"
export NODE_POOL_NAME_AEROSPIKE="aerospike-pool"
export NODE_POOL_NAME_AVS="avs-pool"
export ZONE="us-central1-c"
export FEATURES_CONF="./features.conf"
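
If you have no default project configured, PROJECT_ID will be empty and the gcloud commands below will fail. A minimal guard, as a sketch using Bash parameter expansion:

# Fail fast if any required variable is empty or unset.
: "${PROJECT_ID:?Run 'gcloud config set project <your-project>' first}"
: "${CLUSTER_NAME:?}" "${ZONE:?}" "${FEATURES_CONF:?}"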

Create the GKE cluster

Create a single-node Kubernetes cluster. You will expand it later by adding dedicated node pools for Aerospike Database and AVS.

if ! gcloud container clusters create "$CLUSTER_NAME" \
--project "$PROJECT_ID" \
--zone "$ZONE" \
--num-nodes 1 \
--disk-type "pd-standard" \
--disk-size "100"; then
echo "Failed to create GKE cluster"
else
echo "GKE cluster created successfully."
fi
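
The kubectl commands in the rest of this guide assume your kubeconfig points at the new cluster. gcloud normally configures this when the cluster is created; if not, you can fetch credentials explicitly:

# Write the cluster's credentials into the current kubeconfig.
gcloud container clusters get-credentials "$CLUSTER_NAME" \
--zone "$ZONE" \
--project "$PROJECT_ID"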

Install Aerospike Database (ASDB)

The following steps describe how to install, configure, and validate the Aerospike Database.

  1. Create and configure the ASDB node pool. This step creates a node pool for Aerospike, labels its nodes, and reports any errors in the process. The dedicated node pool ensures that Aerospike Database runs on its own nodes.

    if ! gcloud container node-pools create "$NODE_POOL_NAME_AEROSPIKE" \
    --cluster "$CLUSTER_NAME" \
    --project "$PROJECT_ID" \
    --zone "$ZONE" \
    --num-nodes 3 \
    --local-ssd-count 2 \
    --disk-type "pd-standard" \
    --disk-size "100" \
    --machine-type "n2d-standard-2"; then
    echo "Failed to create Aerospike node pool"
    else
    echo "Aerospike node pool added successfully."
    fi

    kubectl get nodes -l cloud.google.com/gke-nodepool="$NODE_POOL_NAME_AEROSPIKE" -o name | \
    xargs -I {} kubectl label {} aerospike.com/node-pool=default-rack --overwrite
  2. Install the Aerospike Kubernetes Operator (AKO).

    curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.25.0/install.sh | bash -s v0.25.0
    kubectl create -f https://operatorhub.io/install/aerospike-kubernetes-operator.yaml

    while true; do
    if kubectl --namespace operators get deployment/aerospike-operator-controller-manager &> /dev/null; then
    kubectl --namespace operators wait \
    --for=condition=available --timeout=180s deployment/aerospike-operator-controller-manager
    break
    else
    sleep 10
    fi
    done

    echo "AKO added successfully."

  3. Install the Aerospike Database using AKO and configure the appropriate secrets and storage settings.

    kubectl create namespace aerospike
    kubectl --namespace aerospike create serviceaccount aerospike-operator-controller-manager
    kubectl create clusterrolebinding aerospike-cluster \
    --clusterrole=aerospike-cluster --serviceaccount=aerospike:aerospike-operator-controller-manager

    kubectl --namespace aerospike create secret generic aerospike-secret --from-file=features.conf="$FEATURES_CONF"
    kubectl --namespace aerospike create secret generic auth-secret --from-literal=password='admin123'

    kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-kubernetes-operator/master/config/samples/storage/gce_ssd_storage_class.yaml

    kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/ssd_storage_cluster_cr.yaml
  4. Validate the installation by verifying that the pods are healthy in the aerospike namespace.

    kubectl get pods -n aerospike

    NAME              READY   STATUS    RESTARTS   AGE
    aerocluster-0-0   2/2     Running   0          109s
    aerocluster-0-1   2/2     Running   0          109s
    aerocluster-0-2   2/2     Running   0          109s
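
    You can also inspect the custom resource that AKO manages. This is a sketch; the resource name aerocluster matches the sample CR applied in the previous step:

    # Show the AerospikeCluster custom resource and its detailed status.
    kubectl --namespace aerospike get aerospikeclusters
    kubectl --namespace aerospike describe aerospikecluster aerocluster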

Install AVS

  1. Create and configure the AVS node pool.

    if ! gcloud container node-pools create "$NODE_POOL_NAME_AVS" \
    --cluster "$CLUSTER_NAME" \
    --project "$PROJECT_ID" \
    --zone "$ZONE" \
    --num-nodes 3 \
    --disk-type "pd-standard" \
    --disk-size "100" \
    --machine-type "e2-highmem-4"; then
    echo "Failed to create avs node pool"
    else
    echo "avs node pool added successfully."
    fi

    kubectl get nodes -l cloud.google.com/gke-nodepool="$NODE_POOL_NAME_AVS" -o name | \
    xargs -I {} kubectl label {} aerospike.com/node-pool=avs --overwrite
  2. Configure AVS namespace and secrets.

    The following commands set up the AVS namespace and create secrets for the AVS cluster.

    kubectl create namespace avs

    kubectl --namespace avs create secret generic aerospike-secret --from-file=features.conf="$FEATURES_CONF"
    kubectl --namespace avs create secret generic auth-secret --from-literal=password='admin123'
  3. Deploy AVS using the Helm chart.

    helm repo add aerospike-helm https://artifact.aerospike.io/artifactory/api/helm/aerospike-helm
    helm repo update
    helm install avs-gke aerospike-helm/aerospike-vector-search \
    --values https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/avs-gke-values.yaml \
    --namespace avs \
    --version 0.4.0 \
    --wait

    You can validate the installation with the following:

    kubectl get pods -n avs

    NAME                                READY   STATUS    RESTARTS   AGE
    avs-gke-aerospike-vector-search-0   1/1     Running   0          60s
    avs-gke-aerospike-vector-search-1   1/1     Running   0          60s
    avs-gke-aerospike-vector-search-2   1/1     Running   0          60s
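
    If you want to reach AVS directly without a load balancer, you can port-forward to one of the pods. A sketch, assuming AVS listens on its default port of 5000:

    # Forward local port 5000 to the first AVS pod (default AVS port assumed).
    kubectl --namespace avs port-forward pod/avs-gke-aerospike-vector-search-0 5000:5000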

Install networking and monitoring (optional)

This section describes how to set up the Prometheus monitoring stack and the Istio networking stack. These steps are optional; you can substitute other monitoring and networking solutions.

  1. Deploy and configure the Prometheus monitoring stack.

    These commands set up the monitoring stack using Prometheus and apply additional monitoring configurations:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install monitoring-stack prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace

    kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/monitoring/aerospike-exporter-service.yaml
    kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/monitoring/aerospike-servicemonitor.yaml
    kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/monitoring/avs-servicemonitor.yaml
  2. Deploy and configure Istio.

    These commands deploy Istio for application load balancing. A layer 7 load balancer is recommended.

    helm repo add istio https://istio-release.storage.googleapis.com/charts
    helm repo update

    helm install istio-base istio/base --namespace istio-system --set defaultRevision=default --create-namespace --wait
    helm install istiod istio/istiod --namespace istio-system --create-namespace --wait
    helm install istio-ingress istio/gateway \
    --values https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/istio/istio-ingressgateway-values.yaml \
    --namespace istio-ingress \
    --create-namespace \
    --wait

    kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/istio/gateway.yaml
    kubectl apply -f https://raw.githubusercontent.com/aerospike/aerospike-vector-search-examples/main/kubernetes/manifests/istio/avs-virtual-service.yaml
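
    Once the monitoring stack is running, you can open the bundled Grafana locally. A sketch, assuming kube-prometheus-stack's default naming for a release called monitoring-stack:

    # Forward Grafana (service <release>-grafana, port 80) to localhost:3000.
    kubectl --namespace monitoring port-forward svc/monitoring-stack-grafana 3000:80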

Run an example application

You can validate the deployment by connecting to the cluster and running one of the AVS example applications.

  1. Set AVS_HOST to the external IP of your cluster. If you installed Istio, you can do this with the following command:

    export AVS_HOST=$(kubectl get services -n istio-ingress -o jsonpath='{.items[?(@.metadata.name=="istio-ingress")].status.loadBalancer.ingress[0].ip}')
  2. Navigate to aerospike-vector-search-examples/quote-semantic-search/quote-search in your clone of the examples repository, set the environment variables, and launch the quote search app:

    export AVS_IS_LOADBALANCER=True && \
    waitress-serve --host 127.0.0.1 --port 8080 --threads 32 quote_search:app
  3. Go to http://localhost:8080/search and enter a search term like "Tell me about whales" to perform a semantic search.
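
If the app cannot connect, first confirm that the ingress IP was captured and that the AVS port is reachable. A quick sanity check; the port test assumes the gateway forwards AVS's default port 5000, so adjust it for your gateway configuration:

# Verify AVS_HOST is set, then test TCP reachability on the assumed port.
echo "AVS_HOST=${AVS_HOST:?AVS_HOST is empty - is the istio-ingress service ready?}"
nc -zv "$AVS_HOST" 5000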

Supported configuration

The following table describes all of the supported configuration parameters in the AVS Helm chart:

| Parameter | Description | Default |
|---|---|---|
| replicaCount | Configures the number of AVS instance pods to run. | '1' |
| image | Configures the AVS image repository, tag, and pull policy. | See values.yaml. |
| imagePullSecrets | For private Docker registries, when authentication is needed. | See values.yaml. |
| aerospikeVectorSearchConfig | AVS cluster configuration deployed to /etc/aerospike-vector-search/aerospike-vector-search.yml. | See values.yaml. |
| initContainers | List of initContainers added to each AVS pod for custom cluster behavior. | |
| serviceAccount | Service account details such as name and annotations. | See values.yaml. |
| podAnnotations | Additional pod annotations. Must be specified as a map of annotation names to annotation values. | {} |
| podLabels | Additional pod labels. Must be specified as a map of label names to label values. | {} |
| podSecurityContext | Pod security context. | {} |
| securityContext | Container security context. | {} |
| service | Load balancer configuration. For more details see Load Balancer Service. | {} |
| resources | Resource requests and limits for the AVS pods. | {} |
| autoscaling | Enables the horizontal pod autoscaler. | See values.yaml. |
| extraVolumes | List of additional volumes to attach to the AVS pod. | See values.yaml. |
| extraVolumeMounts | Extra volume mounts corresponding to the volumes added to extraVolumes. | See values.yaml. |
| extraSecretVolumeMounts | Extra secret volume mounts corresponding to the volumes added to extraVolumes. | See values.yaml. |
| affinity | Affinity rules, if any, for the pods. | {} |
| nodeSelector | Node selector for the pods. | {} |
| tolerations | Tolerations for the pods. | {} |
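
For example, to change one of these parameters on an existing release, you can pass overrides with --set. The parameter values below are illustrative:

# Scale to three AVS pods and add a pod label, keeping all other values.
helm upgrade avs-gke aerospike-helm/aerospike-vector-search \
--namespace avs \
--version 0.4.0 \
--reuse-values \
--set replicaCount=3 \
--set podLabels.team=search \
--wait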