Webinar

Feed hungry AI systems more data, faster and more efficiently

About this webinar

Presented by
Kiran Matty, Director of Product Management, Aerospike

About this talk
ML models perform better with more data, and the more iterations of training, tuning, and validation you can run, the better your results. So what's the big deal?

Often it comes down to speeding up jobs, Spark jobs for example, to improve operational efficiency. The faster a job runs, the more jobs you can run on the same cluster, maximizing the return on your time and infrastructure and, in turn, cutting training and inference times.

Data prep is painful. The problem is running an online system on streaming data that must serve inferences in milliseconds. The problem is pulling disparate signal data from sources across countries and data centers in real time.

AI/ML systems have insatiable appetites for data.

In this session, we’ll cover how to:

  • Ingest high-velocity, high-volume data at the edge and across sites with "multi-site clustering"
  • Reduce training and inference times
  • Accelerate Spark jobs with massive parallelization (see the sketch after this list)
  • Create a high-throughput, low-latency streaming pipeline
  • Use fewer resources overall
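
To make the Spark point concrete, here is a minimal PySpark sketch of the kind of parallel load the session discusses. It assumes Aerospike Connect for Spark is available on the cluster; the seed host, namespace, and set name are hypothetical placeholders, and the option keys follow that connector's configuration style but should be checked against its documentation.

```python
# Minimal sketch: loading an Aerospike set into Spark as a DataFrame so that
# downstream feature-engineering work is spread across executor cores.
# Assumes Aerospike Connect for Spark is installed; the connection details
# below are illustrative placeholders, not a verified configuration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("aerospike-feature-prep")  # hypothetical app name
    .getOrCreate()
)

features = (
    spark.read
    .format("aerospike")  # data source registered by Aerospike Connect for Spark
    .option("aerospike.seedhost", "10.0.0.1:3000")  # assumed cluster seed node
    .option("aerospike.namespace", "ml")            # hypothetical namespace
    .option("aerospike.set", "signals")             # hypothetical set name
    .load()
)

# Aerospike partitions map to Spark partitions, so a transformation like this
# aggregation runs in parallel across the whole cluster.
features.groupBy("signal_type").count().show()
```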