Applying AI/ML Methodologies to Categorize Storage Workloads and Replaying Them in Standard Test Environments

Abstract

With the complexity of applications increasing every day, the workloads generated by these applications are complicated and hard to replicate in test environments. We propose an efficient method to synthesize a close approximation of these application workloads by analyzing historic autosupport data from the field using an iterative mechanism, along with a method to store and replay these workloads in the test environment to achieve the goals of customer-driven testing.

Problem Statement: As we align more towards customer-driven testing, the quantity and complexity of workloads grow exponentially. Most of our regression testing uses workloads designed to test functionality and stress the array, but some corner cases or race conditions are caught only with complex customer workloads. It is very difficult and time consuming to model and synthesize customer workloads whenever there is a need to reproduce escalations or POCs (proofs of concept). Existing IO tools have no direct mechanism to simulate these customer workloads. Some tools do provide the capability to capture and replay workloads, but only at the host level. They also lack the capability to analyze array statistics, which is more significant for modeling customer workloads.

Solution: In this paper we describe two solutions: the Workload Analyzer and Synthesizer (WAS), which analyzes array autosupport (ASUP) data to synthesize customer workloads, and the Workload Matrix Solution (WMS), which integrates and deploys these synthesized workloads in existing test environments.
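The abstract does not spell out implementation details, so the following is only a minimal sketch of the kind of pipeline WAS and WMS describe: cluster per-volume IO counters of the sort found in ASUP exports to categorize workloads, then render each category as a replayable IO job. The column names, cluster count, and fio parameters are assumptions made for illustration and are not the authors' actual design.

    # Hypothetical sketch: cluster ASUP-style IO counters and emit fio jobs
    # for replay on a test array. Feature names, cluster count, and fio
    # parameters are assumed for illustration; not the WAS/WMS implementation.
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    def categorize_workloads(asup_csv: str, n_clusters: int = 4) -> pd.DataFrame:
        """Group volumes into workload categories from exported ASUP IO stats."""
        df = pd.read_csv(asup_csv)  # assumed columns listed below
        features = ["read_iops", "write_iops", "avg_io_size_kb", "read_pct", "seq_pct"]
        scaled = StandardScaler().fit_transform(df[features])
        df["category"] = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scaled)
        return df

    def to_fio_job(profile: pd.Series, runtime_s: int = 300) -> str:
        """Render one cluster centroid as an fio job section for replay."""
        rw = "randrw" if profile["seq_pct"] < 50 else "rw"
        return (
            f"[replay-cat{int(profile['category'])}]\n"
            f"rw={rw}\n"
            f"rwmixread={int(profile['read_pct'])}\n"
            f"bs={int(profile['avg_io_size_kb'])}k\n"
            f"rate_iops={int(profile['read_iops'] + profile['write_iops'])}\n"
            f"runtime={runtime_s}\n"
            f"time_based=1\n"
        )

    if __name__ == "__main__":
        labeled = categorize_workloads("asup_io_stats.csv")
        centroids = labeled.groupby("category").mean(numeric_only=True).reset_index()
        jobs = "\n".join(to_fio_job(row) for _, row in centroids.iterrows())
        print(jobs)  # feed the generated job file to fio on the test hosts

A real deployment would presumably iterate: replay the generated jobs, compare the resulting array statistics against the original ASUP profile, and refine the job parameters until they converge, which is how an "iterative mechanism" of the kind the abstract mentions could be realized.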

Dhishankar Sengupta
Hewlett Packard Enterprise (HPE)