Archive 2015 Storage Developer Conference Abstracts

Break Out Sessions and Agenda Tracks Include:



BIRDS OF A FEATHER


Moving Data Protection to the Cloud: Trends, Challenges and Strategies

Abstract

There are various new ways to perform data protection using the cloud, and real advantages to doing so. However, developers need to carefully study all of the alternative approaches and the experiences of others (good and bad) to avoid the pitfalls, especially during the transition from strictly local resources to cloud resources. At this BoF we will discuss:

Learning Objectives

  • Critical cloud data protection challenges
  • How to use the cloud for data protection
  • Pros and cons of various cloud data protection strategies
  • Experiences of others (good and bad) to avoid common pitfalls
  • Cloud standards in use – and why you need them

Getting Started with the CDMI Conformance Test Program

Abstract

The SNIA Cloud Data Management Interface (CDMI) is an ISO/IEC standard that enables cloud solution vendors to meet the growing need for interoperability for data stored in the cloud and provides end users with the ability to control the destiny of their data, ensuring hassle-free data access, data protection and data migration from one cloud service to another. The CDMI Conformance Test Program (CTP) is now available. Administered by Tata Consultancy Services, the CDMI CTP validates a cloud product’s conformance to the CDMI standard. Come to this Birds of a Feather session to learn what the CTP program entails, details on the testing service that is offered, how to get the CTP process started, and pricing. Please note: CDMI conformance testing will also be available at the Cloud Plugfest happening at SDC.


Enabling Persistent Memory Applications with NVDIMMs

Abstract

Non-Volatile DIMMs, or NVDIMMs, have emerged as a go-to technology for boosting performance for next generation storage platforms. The standardization efforts around NVDIMMs have paved the way to simple, plug-n-play adoption. Join the SNIA SSSI NVDIMM Special Interest Group for an interactive discussion on "What's Next?" - what customers, storage developers, and the industry would like to see to improve and enhance NVDIMM integration and optimization.


Kinetic Open Storage – Scalable Objects

Philip Kufeldt, Toshiba
Erik Riedel, EMC

Abstract

Open Kinetic is an open source Collaborative Project formed under the auspices of the Linux Foundation dedicated to creating an open standard around Ethernet-enabled, key/value Kinetic devices.


Storage Architectures for IoT

Mark Carlson, Principal Engineer, Industry Standards, Toshiba

Abstract

The Internet of Things (IoT) is expected to produce massive amounts of data streaming in from sensors. This data needs to be stored and analyzed, sometimes in real time. What are the best storage architectures for this use case? Is Hyper-Converged an answer? What about In-Storage Compute? Come to this BoF to learn what ideas are out there and contribute your own.


Storage for Containers

Abhishek Kane, Senior Software Engineer, Symantec Corporation

Abstract

This talk explores the essential requirements to host mission critical database applications in containers. Specifically, it focuses on the following capabilities that the container ecosystem needs to provide to host such databases:

  1. Scalable and persistent data volumes
  2. High Availability for containers
  3. Migration of containers across hosts
  4. Disaster Recovery capability

This talk will demonstrate a case study on how the above goals can be met for the Docker ecosystem. The Docker ecosystem has not yet fully evolved to meet the needs of mission-critical databases running in containers. As a result, there is hesitation in moving enterprise-class and mission-critical databases from physical/virtual machine platforms to containers. Elaborating on each of the above objectives, the talk intends to inspire confidence in deploying mission-critical databases in Docker containers.


Container Data Management Using Flocker

Madhuri Yechuri, Software Engineer, ClusterHQ

Abstract

Containerized microservices call for state to be sticky when the application relocates across compute nodes. State associated with a container might involve secret keys used for trusted communication, log files that the user wants to mine at a later point, or data files that are part of the application logic. The system administrator who sets the application relocation policy has no insight into the state that may be associated with an application running inside a container.

Flocker solves the problem of container data loss when an application relocates, voluntarily or involuntarily, across compute nodes as part of initial placement, maintenance mode, or load balancing workflows performed by orchestrators like Docker Swarm, Mesosphere Marathon, or Kubernetes.

This session includes an overview of Flocker, and a demonstration of Flocker in action using Docker Swarm as the orchestrator of choice.


SNIA and OpenStack: Standards and Open Source

Mark Carlson, SNIA Technical Council Vice Chair

Abstract

The SNIA has created a number of educational materials for OpenStack storage, which have become some of the most popular content it produces. The SNIA has now created a Task Force to investigate a new focused set of activities around OpenStack, which may result in a new group targeted at the adoption of storage industry standards in this open source project. Come join the members of this new task force in discussing the requirements and needs in this space.


NAS Benchmarking and SFS 2014 Forum

Nick Principe, EMC Corporation & SPEC

Abstract

Join several SPEC SFS subcommittee members for discussions about SFS development work and an open Q&A session - bring your questions and feedback! We would also like to follow on from the very successful NAS Benchmarking tutorial with an open Q&A with some of the presenters of that session.


Active Directory Integration War Stories

Oliver Jones, Senior Software Engineer, EMC Isilon
Steven Danneman, Software Engineering Manager, EMC Isilon

Abstract

Every storage server eventually has to provide Active Directory authentication. Implementing an AD client can be as complicated as implementing an SMB or NFS server, with lots of gotchas along the way. Isilon will share our stories about the evolution (through many bugs) of our interoperability with Active Directory, and we’d love to hear yours. Come for a guided, collaborative session.


Using SPEC SFS with the SNIA Emerald Program for EPA Energy Star Data Center Storage Program

Wayne Adams, Chair, SNIA Green Storage Initiative
Carlos Pratt, Chair, SNIA Green Storage TWG

Abstract

The next storage platform category to be added to the EPA Data Center Storage program is NAS. Come learn what it takes to set up a SNIA Emerald NAS testing environment with the SPEC SFS tools, along with the additional energy-related instrumentation and data collection tools. Become involved in SNIA technical work to validate the test methodologies in preparation for 2016. Don’t wait to be kicked in the “NAS” when an Energy Star rating gates selling your NAS solutions.


NVMe Over Fabrics

Panel: Dave Minturn, Intel; Idan Burstein, Mellanox; Christoph Hellwig, Qingbo Wang, HGST
Moderator: Zvonimir Bandic, HGST

Abstract

This one hour session with a panel of experts from the industry will focus on explaining the need for new storage networking protocols for both NAND and emerging non-volatile memory devices. The pressure to reduce network latency to a scale comparable with new solid state devices requires rethinking and reengineering of storage networking protocols. We will discuss the benefits of the NVMe over Fabrics protocol, which utilizes RDMA networking, and present recent measured prototyping data. The panel of experts will be available to answer questions from attendees.


CLOUD


Using CDMI to Manage Swift, S3, and Ceph Object Repositories

David Slik, Technical Director, NetApp, Inc

Abstract

The Cloud Data Management Interface is designed to provide namespace-based management functionality for the superset of object, file and block protocols. This makes it ideally suited for use with common protocols such as NFS, CIFS, iSCSI, Swift and S3. This session provides an overview of how CDMI interoperates with these protocols, and how the use of CDMI as a management protocol adds value to multi-protocol systems. Concrete examples and use cases from end-users and vendors will be highlighted.

Learning Objectives

  • Learn how to use CDMI to manage object repositories
  • Learn how to use CDMI to manage file systems
  • Learn how to use CDMI to manage block storage systems
  • Learn how CDMI works with multi-protocol systems
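To make the management model concrete, here is a minimal sketch of creating a CDMI container and reading back its metadata over HTTP, assuming a hypothetical CDMI-conformant endpoint (cdmi.example.com) and the third-party Python requests library; the header names and content types follow the CDMI specification, while the paths and metadata values are purely illustrative:

    import requests

    BASE = "https://cdmi.example.com"
    HEADERS = {
        "X-CDMI-Specification-Version": "1.1",
        "Content-Type": "application/cdmi-container",
        "Accept": "application/cdmi-container",
    }

    # Create (or update) a container with some user metadata.
    resp = requests.put(
        BASE + "/archive_bucket/",
        headers=HEADERS,
        json={"metadata": {"project": "sdc2015", "retention": "7y"}},
    )
    resp.raise_for_status()

    # Read the container back; a CDMI response carries metadata and children,
    # regardless of which protocol (NFS, CIFS, Swift, S3) wrote the data.
    container = requests.get(BASE + "/archive_bucket/", headers=HEADERS).json()
    print(container.get("metadata"), container.get("children"))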

Unistore: A Unified Storage Architecture for Cloud Computing

Yong Chen, Assistant Professor, Texas Tech University

Abstract

Emerging large-scale applications on Cloud computing platforms, such as information retrieval, data mining, online business, and social networks, are data- rather than computation-intensive. The storage system is one of the most critical components of Cloud computing. Traditional hard disk drives (HDDs) are currently the dominant storage devices in Clouds, but are notorious for long access latency and for being failure prone. Recently emerged storage class memory (SCM) such as Solid State Drives provides a promising new storage solution with high bandwidth, low latency, and no mechanical components, but with inherent limitations of small capacity, short lifetime, and high cost. This talk will introduce an ongoing effort from Texas Tech University and Nimboxx Inc. to build an innovative unified storage architecture (Unistore) with the co-existence and efficient integration of heterogeneous HDD and SCM devices for Cloud storage systems. We will introduce the Unistore design principles and rationale. We will also discuss the prototype implementation with a newly designed data distribution and placement algorithm. This talk is intended for SNIA/SDC general attendees.


The Developer's Dilemma: Do-It-Yourself Storage or Surrender Your Data?

Luke Behnke, VP of Product, Bitcasa

Abstract

Creating an app isn’t simple. Early in the process of designing the app, decisions have to be made around how app data will be stored, and for most developers the cloud is an obvious choice. At this point, developers need to make an important choice: invest time, energy and resources in creating their own DIY file system that sits on top of public cloud infrastructure; or take the shortcut of using a cloud storage API and surrender their users’ data to popular cloud storage services. In this session, Bitcasa CEO Brian Taptich will outline the impact of this dilemma on the future functionality and user experience of an app, and also discuss why the next generation of apps will require better file systems that offer broad capabilities, performance, security and scalability, and most importantly, developer control of user data and experience.

Learning Objectives

  • To explain the benefits of utilizing the cloud for developers
  • To dissect the various options developers have when utilizing cloud storage
  • Why choosing the right platform is imperative for user experience and privacy
  • Why owning user data is important if developers want to own the customer

How to Test CDMI Extension Features Like LTFS, Data Deduplication, OVF, and Partial-Value Copy Functionality: Challenges, Solutions and Best Practices

Sachin Goswami, Solution Architect and Storage COE Head Hi Tech, TCS

Abstract

The cloud storage space has been outperforming industry expectations, as is evident in several industry reports. The SNIA Cloud Data Management Interface (CDMI) specification is increasingly being adopted as a standard across the cloud.

The popularity of the CDMI specification can be judged by the present cloud storage market being flooded with CDMI-server-based products offered by many big and small cloud storage vendors. The SNIA Cloud Storage Technical Work Group has been unceasingly working to address the challenges that exist in the storage domain. It is striving to provide support and solutions for data deduplication, Open Virtualization Format (OVF), partial upload, server-side partial-value copy, and LTFS as primary cloud storage, managing latency as well as backup and archival solutions. TCS is focusing on maturing the Conformance Test Suite by adding more enhancements. In this proposal we will share the approach TCS will adopt to overcome the challenges in testing LTFS integration with CDMI, data deduplication, partial upload on the server, and Open Virtualization Format (OVF) for CDMI and non-CDMI based scenarios of cloud products. Additionally, we will share the challenges faced and learnings gathered from testing CDMI products for conformance. These learnings will serve as a ready reference for organizations developing LTFS, data deduplication, OVF and partial upload in CDMI and non-CDMI based product suites.

Learning Objectives

  • Understanding how to develop a test specification for LTFS export, and how to test it
  • Understanding how to develop a test specification for server-side partial-value copy, and how to test it
  • Understanding how to develop test specifications for OVF and partial upload, and how to test them


CLOUD AND FILES


Big Data Analytics on Object Storage - Hadoop Over Ceph Object Storage with SSD Cache

Yuan Zhou, Software Engineer, Intel Asia R&D

Abstract

Cloud object stores provide the ability to store objects across multiple datacenters over a straightforward HTTPS REST API. The namespace is hierarchical and can be searched. Objects can be arbitrarily large and numerous, and the deployment can be done on a commodity-hardware basis. This makes object stores an attractive option for archiving the large amounts of data produced in science and industry. To analyze the data, advanced analytics such as MapReduce can be used. However, copying the data from the object store into the distributed file system that the analytics system expects is costly; running analytics directly on object stores greatly improves usability and performance. In this work, we study the possibility of running Hadoop over Ceph Object Storage and identify common problems.
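As a rough illustration of the "analytics directly on the object store" idea, the sketch below checks connectivity to a Ceph RADOS Gateway through its S3-compatible API using boto3; the endpoint, port and credentials are placeholders, and a Hadoop cluster would point its s3a connector (fs.s3a.endpoint) at the same gateway so jobs can read the objects in place:

    import boto3

    # Placeholder endpoint and credentials for a Ceph RGW exposing the S3 API.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://ceph-rgw.example.com:7480",
        aws_access_key_id="RGW_ACCESS_KEY",
        aws_secret_access_key="RGW_SECRET_KEY",
    )

    s3.create_bucket(Bucket="analytics-input")
    s3.put_object(Bucket="analytics-input", Key="logs/day1.txt",
                  Body=b"sample record\n")

    # List what an analytics job would see when reading the bucket in place.
    for obj in s3.list_objects_v2(Bucket="analytics-input").get("Contents", []):
        print(obj["Key"], obj["Size"])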


GlusterFS - The Thrilla in Manila

Ramnath Sai Sagar, Cloud Technical Marketing Manager, Mellanox
Veda Shankar, Technical Marketing Manager, RedHat

Abstract

With an estimated 15 billion devices connected to the internet in 2015 generating exabytes of data, there is a huge influx of data that puts severe stress on the underlying storage. This explosion in data growth, along with the corresponding expenditure on infrastructure, has catalyzed the need for a fundamental shift in the way we look at storage. The solution to this problem is Software Defined Storage (SDS), which is also the natural fit for the cloud model. OpenStack is fairly mature for block storage (Cinder) and object storage (Swift), but many current business applications require file storage. Enter Manila - the new service that provides an automated, on-demand and scalable service for delivering shared and distributed file systems, all using a standardized API native to OpenStack. From the set of available Manila drivers, we’ll focus today on GlusterFS, the founding SDS in the Manila project and an open, proven distributed storage system. In this presentation, we’ll review Manila backed by the GlusterFS scale-out storage system and explore the value of RDMA in an SDS system like GlusterFS in certain use cases.


CLOUD AND INTEROP


What You Need to Know on Cloud Storage

David Slik, Technical Director, NetApp
Mark Carlson, Principal Engineer, Industry Standards, Toshiba

Abstract

This session assumes no prior knowledge of cloud storage and is intended to bring a storage developer up to speed on the concepts, conventions and standards in this space. The session will include a live demo of a storage cloud in operation to reinforce the concepts presented.


Using REST API for Management Integration

Brian Mason, MTS-SW, NetApp

Abstract

Integration is key to managing storage systems today. Customers do not want vendor lock-in or vendor-specific management tools. They want to use their best-in-class management tools and have various storage systems integrate into those tools. A REST API for your storage system is an absolute must in today's market. REST is the common denominator for management integration. Fortunately, it is rather simple to create a REST API. It is a little harder to get one just right and to get the documentation done in a usable form.

Learning Objectives

  • What is a REST API? How are they different from previous API protocols? Why are they so useful?
  • Technology Primer for REST
  • How to build a REST API
  • Documentation Standards
  • Using a REST API as a client
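As a rough sketch of how small such an API can be, here is a toy storage-management endpoint built with the Flask micro-framework; the resource names, fields and in-memory "catalog" are invented for illustration and stand in for calls to a real storage backend:

    from flask import Flask, jsonify, request, abort

    app = Flask(__name__)

    # Toy in-memory volume catalog standing in for a real storage backend.
    volumes = {"vol1": {"name": "vol1", "size_gb": 100, "state": "online"}}

    @app.route("/api/v1/volumes", methods=["GET"])
    def list_volumes():
        return jsonify(list(volumes.values()))

    @app.route("/api/v1/volumes/<name>", methods=["GET"])
    def get_volume(name):
        if name not in volumes:
            abort(404)
        return jsonify(volumes[name])

    @app.route("/api/v1/volumes", methods=["POST"])
    def create_volume():
        body = request.get_json(force=True)
        volumes[body["name"]] = {"name": body["name"],
                                 "size_gb": body["size_gb"],
                                 "state": "online"}
        return jsonify(volumes[body["name"]]), 201

    if __name__ == "__main__":
        app.run(port=8080)

A client then needs nothing more than an HTTP library: a GET on /api/v1/volumes returns the volume list as JSON, which is what makes REST the common denominator for management integration.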

SNIA Tutorial:
Windows Interoperability Workshop

Christopher Hertel, SW Senior Program Engineer, Samba Team / Dell Compellent

Abstract

Windows and POSIX are different, and bridging the gap between the two—particularly with Network File Systems—can be a confusing and daunting endeavor ...and annoying, too.

This tutorial will provide an overview of the SMB3 network file protocol (the heart and soul of Windows Interoperability) and describe some of the unique and powerful features that SMB3 provides. We will also point out and discuss some of the other protocols and services that are integrated with SMB3 (such as PeerDist), and show how the different pieces are stapled together and made to fly. The tutorial will also cover the general structure of Microsoft's protocol documentation, the best available cartography for those lost in the Interoperability Jungle. Some simple code examples will be used sparingly as examples, wherever it may seem clever and useful to do so.

Learning Objectives

  • Become familiar with the Windows Interoperability Ecosystem
  • Better understand Microsoft's Specifications
  • Identify Windows-specific semantic details


DATA CENTER INFRASTRUCTURE


Next Generation Data Centers: Hyperconverged Architectures' Impact on Storage

Mark O'Connell, Distinguished Engineer, EMC

Abstract

A modern data center typically contains a number of specialized storage systems which provide centralized storage for a large collection of data center applications. These specialized systems were designed and implemented as a solution to the problems of scalable storage, 24x7 data access, centralized data protection, centralized disaster protection strategies, and more. While these issues remain in the data center environment, new applications, new workload profiles, and the changing economics of computing have introduced new demands on the storage system which drive towards new architectures, and ultimately towards a hyperconverged architecture. After reviewing what a hyperconverged architecture is and the building blocks in use in such architectures, there will be some predictions for the future of such architectures.

Learning Objectives

  • What is a hyperconverged architecture
  • How hyperconverged architectures differ from traditional architectures
  • What technologies are being used to build hyperconverged architectures
  • What workloads are appropriate for hyperconverged architectures

PCI Express: Driving the Future of Storage

Ramin Neshati, PCI-SIG Board Member and Marketing Chair, PCI-SIG

Abstract

The data explosion has led to a corresponding explosion in the demand for storage. At the same time, traditional storage interconnects such as SATA are being replaced with PCI Express (PCIe)-attached storage solutions. Leveraging PCIe technology removes performance bottlenecks and provides long-term bandwidth and performance scalability as PCIe evolves from an 8GT/s bit rate to 16GT/s and beyond. PCIe-attached storage delivers a robust solution that is supported natively in all operating systems and available in a wide array of form factors, either chip-to-chip or through expansion modules and daughter cards.

Learning Objectives

  • Gain insight into PCI Express technology and how it is used in storage solutions
  • Learn how PCI Express technology advancements in lowering active and idle power can be used in your storage solution
  • Learn how PCIe 3.0 and PCIe 4.0 provide a strategic solution for storage attachment
  • Understand the work the PCI-SIG is doing in new form factors for driving broad adoption in storage applications
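To put the bit rates in perspective, a back-of-the-envelope calculation of usable per-lane bandwidth (both generations use 128b/130b encoding) looks like this; the x4 figure is shown because that is a common SSD link width:

    # Approximate usable throughput per lane, per direction.
    def pcie_lane_gbytes_per_s(gt_per_s, encoding=128.0 / 130.0):
        return gt_per_s * encoding / 8  # GT/s * encoding efficiency / 8 bits per byte

    for gen, rate in (("PCIe 3.0", 8.0), ("PCIe 4.0", 16.0)):
        lane = pcie_lane_gbytes_per_s(rate)
        print("%s: ~%.2f GB/s per lane, ~%.1f GB/s for an x4 device"
              % (gen, lane, lane * 4))
    # PCIe 3.0: ~0.98 GB/s per lane, ~3.9 GB/s for an x4 device
    # PCIe 4.0: ~1.97 GB/s per lane, ~7.9 GB/s for an x4 device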

Next Generation Low Latency Storage Area Networks

Rupin Mohan, Chief Technologist - Storage Networking, HP

Abstract

In this session, we will present the current state (FC, FCoE and iSCSI) and the future state (iSER, RDMA, NVMe, and more) of next generation low latency Storage Area Networks (SANs), and discuss what the future of SAN protocols will look like for block, file and object storage.


The Pros and Cons of Developing Erasure Coding and Replication Instead of Traditional RAID in Next-Generation Storage Platforms

Abhijith Shenoy, Engineer, Hedvig

Abstract

Scale-out, hyperconverged, hyperscale, software-defined, hybrid arrays – the list of scalable and distributed storage systems is rapidly growing. But all of these innovations require tough choices on how best to protect data. Moreover, the abundance of 4-, 8- and even 10-TB drives makes the traditional approach of RAID untenable, because repairing drive failures can take days or even weeks depending on the architecture and drive capacity. New approaches that balance performance with availability are needed. Erasure coding and replication are emerging, rapidly maturing techniques that empower developers with new data protection methods.

This session will discuss the pros and cons of erasure coding and replication versus traditional RAID techniques. Specifically, this session will discuss the performance vs. availability tradeoffs of each technique, as well as present an in-depth look at using tunable replication as the ideal data protection solution, as proven by large-scale distributed systems.

Learning Objectives

  • Attendees will learn why RAID isn’t adequate in next-gen storage architectures.
  • Attendees will learn the pros and cons of erasure coding and replication in newer storage architectures.
  • Attendees will learn how replicating at a per-volume basis provides the best mix of performance and availability.
  • Attendees will learn tips and best practices for incorporating new data protection methods into storage platforms.
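For a feel of the tradeoff the session examines, the following toy calculation compares raw-capacity overhead and fault tolerance for n-way replication and a k+m erasure code; the specific layouts shown (3-way, 6+3, 10+4) are generic examples, not figures from the talk:

    def replication(copies):
        # n copies: overhead is n times raw capacity, survives n-1 failures
        return {"overhead": float(copies), "failures_tolerated": copies - 1}

    def erasure_code(k, m):
        # k data + m parity fragments: overhead (k+m)/k, survives m failures
        return {"overhead": (k + m) / float(k), "failures_tolerated": m}

    print("3-way replication:", replication(3))    # 3.0x, tolerates 2 failures
    print("EC 6+3:           ", erasure_code(6, 3))   # 1.5x, tolerates 3 failures
    print("EC 10+4:          ", erasure_code(10, 4))  # 1.4x, tolerates 4 failures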

DATABASE


The Lightning Memory–Mapped Database

Howard Chu, CTO, Symas

Abstract

The Lightning Memory-Mapped Database (LMDB) was introduced at LDAPCon 2011 and has been enjoying tremendous success in the intervening time. LMDB was written for the OpenLDAP Project and has proved to be the world's smallest, fastest, and most reliable transactional embedded data store. It has cemented OpenLDAP's position as the world's fastest directory server, and its adoption outside the OpenLDAP Project continues to grow, with a wide range of applications including big data services, crypto-currencies, machine learning, and many others.

The talk will cover highlights of the LMDB design as well as the impact of LMDB on other projects.

Learning Objectives

  • Highlight problems with traditional DB storage designs
  • Explain benefits of single-level-store
  • Explain corruption-proof design and implementation
  • Compare and contrast leading data structures: B+tree, LSM, Fractal Trees
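A minimal sketch of the programming model, assuming the third-party py-lmdb binding (pip install lmdb); keys and values are byte strings, writes are ACID transactions, and readers run against a consistent snapshot of the single memory-mapped file:

    import lmdb

    env = lmdb.open("/tmp/lmdb-demo", map_size=1 << 30)  # 1 GiB memory map

    # One write transaction: it either commits fully or not at all.
    with env.begin(write=True) as txn:
        txn.put(b"uid:1000", b"alice")
        txn.put(b"uid:1001", b"bob")

    # Readers see a consistent snapshot without taking locks.
    with env.begin() as txn:
        print(txn.get(b"uid:1000"))          # b'alice'
        for key, value in txn.cursor():      # keys iterate in sorted (B+tree) order
            print(key, value)

    env.close()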

The Bw-Tree Key-Value Store and Its Applications to Server/Cloud Data Management in Production

Sudipta Sengupta, Principal Research Scientist, Microsoft Research

Abstract

The Bw-Tree is an ordered key-value store, built by layering a B-tree form access method over a cache/storage sub-system (LLAMA) that is lock-free and organizes storage in a log-structured manner. It is designed to optimize performance on modern hardware, specifically (i) multi-core processors with multi-level memory/cache hierarchy, and (ii) flash memory based SSDs with fast random reads (but inefficient random write performance). The Bw-Tree is shipping in three of Microsoft’s server/cloud products – as the key sequential index in SQL Server Hekaton (main memory database), as the indexing engine inside Azure DocumentDB (distributed document-oriented store), and as an ordered key-value store in Bing ObjectStore (distributed back-end supporting many properties in Bing).

Learning Objectives

  • Bw-Tree data structure
  • Lock-free design for high concurrency
  • Log-structured storage design for flash based SSDs
  • Page-oriented store (LLAMA) for building access methods on top
  • Bw-Tree Applications in Production at Microsoft

IMDB NDP Advances

Gil Russell, Principal Analyst, Semiscape

Abstract

In-Memory Database appliances are rapidly evolving, becoming in effect the main operating stored image for both analytic and cognitive computing applications in the next generation of data center and cloud in-rack storage.

Co-opting DRAM with proximal NAND flash mass storage, combined with Near Data Processing, re-imagines the entire computing paradigm by effectively turning an entire database image into a content-addressable look-alike. Candidates for Storage Class Memory are nearing market introduction and, with Near Data Processing abilities, will radically change Database Management Systems.

Learning Objectives

  • An understanding of performance metrics for IMDB systems
  • The importance of silicon photonic interconnects
  • The evolution to a low latency high bandwidth DB environment
  • Competing elements for market supremacy.
  • Cognitive computing - the next step


DEDUPLICATION


Taxonomy of Differential Compression

Liwei Ren, Scientific Adviser, Trend Micro

Abstract

Differential compression (aka delta encoding) is a special category of data de-duplication. It finds many applications in various domains such as data backup, software revision control systems, incremental software update, and file synchronization over networks, to name just a few. This talk will introduce a taxonomy of how to categorize delta encoding schemes in various applications. The pros and cons of each scheme will be investigated in depth.

Learning Objectives

  • Why do we need differential compression?
  • A mathematical model for describing the differences between two files
  • A taxonomy for categorizing differential compression
  • Analysis for practical applications
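The copy/insert model underlying most delta encoders can be illustrated with a toy implementation built on Python's difflib; real tools such as xdelta, bsdiff or rsync use far more sophisticated matching, so this only shows the general shape of a delta:

    from difflib import SequenceMatcher

    def make_delta(old, new):
        ops = []
        for tag, i1, i2, j1, j2 in SequenceMatcher(None, old, new).get_opcodes():
            if tag == "equal":
                ops.append(("COPY", i1, i2 - i1))    # reuse bytes from the old file
            elif tag in ("replace", "insert"):
                ops.append(("INSERT", new[j1:j2]))   # literal bytes from the new file
            # "delete": old bytes are simply not copied
        return ops

    def apply_delta(old, delta):
        out = bytearray()
        for op in delta:
            if op[0] == "COPY":
                _, offset, length = op
                out += old[offset:offset + length]
            else:
                out += op[1]
        return bytes(out)

    old = b"the quick brown fox jumps over the lazy dog"
    new = b"the quick red fox jumped over the lazy dogs"
    delta = make_delta(old, new)
    assert apply_delta(old, delta) == new
    print(delta)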

Design Decisions and Repercussions of Compression and Data Reduction in a Storage Array

Chris Golden, Software Engineer, Pure Storage

Abstract

All-flash arrays incorporate a number of data reduction techniques to increase effective capacity and reduce overall storage costs. Compression and deduplication are two commonly employed techniques, each with multiple different strategies for implementation. Because compression and deduplication are only part of a greater data reduction strategy, one must also understand their codependent interactions with the rest of a storage system. This talk presents a structured overview of multiple different compression and deduplication technologies. The basics of each technique are presented alongside their benefits, drawbacks and impact on overall system design. This talk then augments that understanding by applying these various techniques to a sample real-world workload, demonstrating the impact of these decisions in practice.

Learning Objectives

  • Gain a deeper understanding of compression, deduplication and storage layout
  • Benefits and drawbacks of using data reduction techniques in a flash storage array
  • Examination of co-dependent interactions between various data reduction techniques and workload
  • Supplementing theory with practice via analysis of a sample workload
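A toy pipeline below shows the kind of interaction the talk covers: fixed-size chunking, fingerprint-based deduplication, then compression of only the unique chunks, with the effective reduction ratio falling out of the two techniques combined; the chunk size, hash choice and data are illustrative only:

    import hashlib, zlib

    CHUNK = 4096

    def reduce_stream(data):
        store = {}                        # fingerprint -> compressed unique chunk
        logical = 0
        for off in range(0, len(data), CHUNK):
            chunk = data[off:off + CHUNK]
            logical += len(chunk)
            fp = hashlib.sha256(chunk).digest()
            if fp not in store:           # duplicate chunks are stored only once
                store[fp] = zlib.compress(chunk, 6)
        physical = sum(len(c) for c in store.values())
        return logical, physical

    data = (b"A" * 8192 + b"moderately compressible text " * 200) * 4
    logical, physical = reduce_stream(data)
    print("logical %d B -> physical %d B (%.1f:1 reduction)"
          % (logical, physical, float(logical) / physical))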

DISTRIBUTED SYSTEMS


DAOS – An Architecture for Extreme Scale Storage

Eric Barton, Lead Architect of the High Performance Data Division, Intel

Abstract

Three emerging trends must be considered when assessing how storage should operate at extreme scale. First, continuing expansion in the volume of data to be stored is accompanied by increasing complexity in the metadata to be stored with it and queries to be executed on it. Second, ever increasing core and node counts require corresponding scaling of application concurrency while simultaneously increasing the frequency of hardware failure. Third, new NVRAM technologies allow storage, accessible at extremely fine grain and low latency, to be distributed across the entire cluster fabric to exploit full cross-sectional bandwidth. This talk describes Distributed Application Object Storage (DAOS) – a new storage architecture that Intel is developing to address the functionality, scalability and resilience issues and exploit the performance opportunities presented by these emerging trends.

Learning Objectives

  • Exascale / Big Data
  • Scalable Distributed Storage Systems
  • Object Storage
  • Persistent Memory

New Consistent Hashing Algorithms for Data Storage

Jason Resch, Software Architect, Cleversafe, Inc.

Abstract

Consistent Hashing provides a mechanism through which independent actors in a distributed system can reach an agreement about where a resource is, who is responsible for its access or storage, and even derive deterministically a prioritized list of fall-backs should the primary location be down. Moreover, consistent hashing allows aspects of the system to change dynamically while minimizing disruptions. We've recently developed a new consistent hashing algorithm, which we call the Weighted Rendezvous Hash. Its primary advantage is that it obtains provably minimum disruption during changes to a data storage system. This presentation will introduce this algorithm for the first time, and consider several of its applications.

Learning Objectives

  • What is Consistent Hashing
  • Traditional applications of consistent hashing
  • The implementation of the new algorithm: Weighted Rendezvous Hash
  • Why Weighted Rendezvous Hash is more efficient than previous algorithms
  • Applications of Weighted Rendezvous Hash in data storage systems
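The flavor of the technique can be seen in this sketch of weighted rendezvous (highest-random-weight) hashing: every node scores the key independently and the highest score wins, so membership changes only remap the keys whose top choice was the affected node. This is one common formulation and is not claimed to be the exact algorithm presented in the talk:

    import hashlib
    import math

    def _unit_hash(node, key):
        # Hash (node, key) to a float strictly inside (0, 1).
        digest = hashlib.sha256(("%s:%s" % (node, key)).encode()).digest()
        value = int.from_bytes(digest[:8], "big")
        return (value + 0.5) / 2.0 ** 64

    def choose_node(key, nodes):
        # nodes maps node name -> capacity weight; the highest score owns the key.
        return max(nodes, key=lambda n: -nodes[n] / math.log(_unit_hash(n, key)))

    cluster = {"node-a": 1.0, "node-b": 1.0, "node-c": 2.0}  # node-c has 2x capacity
    for obj in ("object-1", "object-2", "object-3"):
        print(obj, "->", choose_node(obj, cluster))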

Beyond Consistent Hashing and TCP: Vastly Scalable Load Balanced Storage Clustering

Alex Aizman, CTO and Founder, Nexenta Systems
Caitlin Bestler, Senior Director of Arch, Nexenta Systems

Abstract

Successive generations of storage solutions have increased decentralization. Early NAS systems made all the decisions on a single server, down to sector assignment. Federated NAS enabled dynamic distribution of the namespace across multiple storage servers. The first Object Clusters delegated CRUD-based management of both object metadata and data to OSDs.

The current generation of Object Clusters uses Consistent Hashing to eliminate the need for central metadata. However, Consistent Hashing and its derivatives, combined with the prevalent use of TCP/IP in storage clusters, result in performance hot spots and bottlenecks, diminished scale-out capability and imbalances in resource utilization.

These shortcomings will be demonstrated with a simulation of a large storage cluster. An alternative next generation strategy that simultaneously optimizes available IOPS "budget" of the back-end storage, storage capacity, and network utilization will be explained. Practically unlimited load-balanced scale-out capability using Layer 5 (Replicast) protocol for Multicast Replication within the cluster will be presented.

Learning Objectives

  • Why neither Consistent Hashing nor TCP/IP scale
  • How CCOW (Cloud Copy-on-Write) and Replicast provide for infinite scale-out
  • Impact on IOPS, Storage Capacity and Network utilization
  • Simulation: queuing and congestion in a 4,000-node cluster
  • Actual results: measured with a more affordable cluster

Real World Use Cases for Tachyon, a Memory-Centric Distributed Storage System

Haoyuan Li, CEO, Tachyon Nexus

Abstract

Memory is the key to fast big data processing. This has been realized by many, and frameworks such as Spark and Shark already leverage memory performance. As data sets continue to grow, storage is increasingly becoming a critical bottleneck in many workloads.

To address this need, we have developed Tachyon, a memory-centric fault-tolerant distributed storage system, which enables reliable file sharing at memory-speed across cluster frameworks such as Apache Spark, MapReduce, and Apache Flink. The result of over three years of research and development, Tachyon achieves both memory-speed and fault tolerance.

Tachyon is Hadoop compatible. Existing Spark, MapReduce, and Flink programs can run on top of it without any code changes. Tachyon is the default off-heap option in Spark. The project is open source and is already deployed at many companies in production. In addition, Tachyon has more than 100 contributors from over 30 institutions, including Yahoo, Tachyon Nexus, Red Hat, Baidu, Intel, and IBM. The project is the storage layer of the Berkeley Data Analytics Stack (BDAS) and also part of the Fedora distribution.

In this talk, we give an overview of Tachyon, as well as several use cases we have seen in the real world.


Where Moore's Law Meets the Speed of Light: Optimizing Exabyte-Scale Network Protocols

Yogesh Vedpathak, Software Developer, Cleversafe

Abstract

Scalability is critically important to distributed storage systems. Exabyte-scale storage is already on the horizon and such systems involve tens of thousands of nodes. But today’s Internet protocols were never designed to handle such cases. In systems this big it's impossible to maintain connections and sessions with every storage device. However, multiple round trip connection setups, TLS handshakes, and authentication mechanisms, compounded with the unyielding speed of light and geo-dispersed topologies create a perfect storm for high latency and bad performance. In this presentation we explore the available options to achieve security, performance, and low latency in a system where persistent sessions are an unaffordable luxury.

Learning Objectives

  • Limitations of today’s Internet protocols in large distributed systems
  • How to implement a secure, connectionless, single-round-trip network protocol
  • What the network topology will look like in an exabyte-scale globally dispersed storage system

/etc


Implications of Emerging Storage Technologies on Massive Scale Simulation Based Visual Effects

Yahya H. Mirza, CEO/CTO, Aclectic Systems Inc

Abstract

As the feature film industry moves towards higher resolution frame sizes, from 4K to 8K and beyond, physically based visual effects such as smoke, fire, water, explosions, etc. demand higher resolution simulation grids. A common rule of thumb for final renders is that one voxel is utilized per pixel. Thus a 4K (4096 x 2160 pixels) frame may require simulation on a massive grid, resulting in significant compute and I/O costs. The result is unacceptable turn-around times and increased production costs. This presentation will overview the tools, production pipeline and production-relevant open source and proprietary physical simulation software utilized by major feature film studios to create blockbuster or “tent pole” productions.

Aclectic Systems Inc. (Aclectic) is developing Colossus™, a custom hardware/software integrated solution to dramatically speed up massive scale physically based visual effects. To achieve our performance goals, Aclectic is taking a systems approach which accelerates simulation, volume rendering and I/O. Throughout, a discussion will be intertwined about how emerging storage technologies such as flash, NVMe and NVMe over Fabrics could play a part in a future integrated solution used to lower production costs.


How Did Human Cells Build a Storage Engine?

Sanjay Joshi, CTO Life Sciences, EMC Emerging Technologies Division

Abstract

The eukaryotic cell is a fascinating piece of biological machinery – storage is at its heart, literally, within the nucleus. This presentation will tell a story of the evolution of the storage portion of the human cell and its present capacity and properties that could be "bio-mimicked" for future digital storage systems, especially deep archives.

Learning Objectives

  • Biological storage concepts
  • Requirements for a storage 'unit'
  • Requirements for replication
  • Requirements for error correction
  • Power management

Apache Ignite - In-Memory Data Fabric

Dmitriy Setrakyan, VP of Engineering, GridGain Systems

Abstract

This presentation will provide a deep dive into a new Apache project: Apache Ignite. Apache Ignite is an in-memory data fabric that combines the industry's first distributed and fault-tolerant in-memory file system, in-memory cluster computing, an in-memory data grid and in-memory streaming under the umbrella of one fabric. The in-memory data fabric slides between applications and various data sources and provides in-memory data storage to the applications.

Apache Ignite is the first general purpose in-memory computing platform in the Apache Software Foundation family. We believe it will have the same effect on fast data processing as Hadoop has had on big data processing. A better understanding of the inner details behind Apache Ignite will hopefully encourage more companies and individual committers to join the project.

Learning Objectives

  • Learn about an industry-leading in-memory data fabric

Integrity of In-memory Data Mirroring in Distributed Systems

Tejas Wanjari, Senior Software Engineer, EMC Data Domain

Abstract

Data in memory can be in a more recently modified state than its on-disk copy. Also, unlike the on-disk copy, the in-memory data might not be checksummed, replicated or backed up every time it is modified. So the data must be checksummed before mirroring to avoid network corruption. But checksumming the data in the application has other overheads: it must handle networking functionality such as retransmission, congestion control, etc. Secondly, if it delays the validation of mirrored data, it might be difficult to recover the correct state of the system.

Mirrored-data integrity as a transport protocol function leads to modular design and better performance. We propose a novel approach that utilizes TCP with MD5 signatures to handle the network integrity overhead, so the application can focus on its primary task. We discuss the evaluation and use case of this approach (NVM mirroring in Data Domain HA) to show its advantages over the conventional approach of checksumming in the application.

Learning Objectives

  • Designing efficient data-mirroring in backup and recovery systems, where reliability is prime
  • Linux kernel TCP know-how for using it with MD5 option
  • Analysis of conventional approach vs. the TCP MD5
  • Use-case: TCP MD5 option for NVM mirroring in Data Domain HA
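For contrast with the proposed design, the sketch below shows the conventional baseline the talk argues against: the application frames every mirrored buffer with its own MD5 digest and the receiver re-verifies it, duplicating integrity work that TCP's MD5 signature option (RFC 2385) could perform at the transport layer instead. The framing format here is invented purely for illustration:

    import hashlib
    import struct

    def frame(payload):
        # Prefix the buffer with its length and MD5 digest before sending.
        digest = hashlib.md5(payload).digest()
        return struct.pack("!I16s", len(payload), digest) + payload

    def unframe(message):
        # Receiver re-computes the digest and rejects corrupted buffers.
        length, digest = struct.unpack("!I16s", message[:20])
        payload = message[20:20 + length]
        if hashlib.md5(payload).digest() != digest:
            raise IOError("mirrored buffer corrupted in flight")
        return payload

    nvram_page = b"\x00" * 4096          # stand-in for a dirty NVM region
    assert unframe(frame(nvram_page)) == nvram_page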

FILE SYSTEMS


Learnings from Creating a Plugin Module for OpenStack Manila Services

Vinod Eswaraprasad, Software Architect, Wipro

Abstract

Manila is the file sharing service for OpenStack. Manila provides the management of file shares (for example, NFS and CIFS) as a core service to OpenStack. Manila, like all other OpenStack services, follows a pluggable architecture and provides management of shared file system instances. This paper discusses our work on integrating a multi-protocol NAS storage device with the OpenStack Manila service. We look at the architectural principles behind the scalability and modularity of Manila services, and analyze the interface extensions required to integrate a typical NAS head. We also take a deeper look at the NAS file share management interfaces required for a software defined storage controller within the OpenStack Manila framework.

Learning Objectives

  • OpenStack File sharing service architecture
  • The API and integration framework for OpenStack services
  • NAS share management - and integration
  • SDS - interfaces required for a NAS device
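The shape of such a plugin is roughly as follows: a share-driver class that maps Manila's share lifecycle calls onto the NAS head's own management API. This is a rough skeleton assuming the driver interface of the Kilo/Liberty era; method names and signatures may differ between releases, the NAS client calls are hypothetical, and error handling is omitted:

    from manila.share import driver


    class ExampleNASDriver(driver.ShareDriver):
        """Maps Manila share operations onto a vendor NAS management API."""

        def __init__(self, *args, **kwargs):
            # False: this backend does not manage its own share servers.
            super(ExampleNASDriver, self).__init__(False, *args, **kwargs)
            self.backend_name = "example_nas"

        def create_share(self, context, share, share_server=None):
            path = "/exports/%s" % share["id"]
            # nas_client.create_export(path, size_gb=share["size"])  # vendor call
            return "192.0.2.10:%s" % path          # export location for the share

        def delete_share(self, context, share, share_server=None):
            pass  # nas_client.delete_export(...)

        def allow_access(self, context, share, access, share_server=None):
            pass  # add access["access_to"] (IP or user) to the export ACL

        def deny_access(self, context, share, access, share_server=None):
            pass  # remove the entry from the export ACL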

Leveraging BTRFS, Linux and Open Source in Developing Advanced Storage Solutions

Suman Chakravartula, Maintainer, Rockstor

Abstract

The future of Linux filesystems is here with the emergence of BTRFS. Other advancements in Linux combined with BTRFS provide a robust OS platform for developers. Features ranging from CoW snapshots, robust software RAID and data protection to compression, dedup and efficient replication, to name just a few, are accessible to developers. These Linux OS level advancements, combined with proven application-level open source tools and libraries, give developers a lot of horsepower and raw material to build creative and powerful solutions.

Learning Objectives

  • Learn to develop with BTRFS
  • Learn about new storage related advancements in Linux
  • Learn about challenges developing scalable storage solutions using open source
  • Learn about application level opensource tools and libraries that help storage development
  • Learn about open source storage ecosystem

Apache HDFS: Latest Developments and Trends

Jakob Homan, Distributed Systems Engineer, Microsoft

Abstract

During the past two years, HDFS has been rapidly developed to meet the needs of enterprise and cloud customers. We'll take a look at the new features, their implementations and how they address previous shortcomings of HDFS.


How to Enable a Reliable and Economic Cloud Storage Solution by Integrating SSD with LTFS – Addressing Challenges and Best Practices

Ankit Agrawal, Solution Developer, TCS
Sachin Goswami, TCS

Abstract

The IT industry is constantly evolving, transforming ideas into cutting edge products and solutions to provide better services to customers. The Linear Tape File System (LTFS) is one such file system; it overcomes drawbacks of traditional tape storage technology such as sequential navigation. SNIA's LTFS Technical Work Group is adapting to emerging market needs and developing and enhancing the Linear Tape File System (LTFS) specifications for tape technology.

TCS has also started working on some ideas in the LTFS space, and in this proposal we will share our views on how to integrate SSD as a cache with an LTFS tape system to transparently deliver the best benefits for object-based storage. This combination will allow us to deliver a reliable and economic storage solution without sacrificing performance. We will also talk about the potential challenges in our approach and best practices that can be adopted to overcome these challenges.

Learning Objectives

  • Understanding the LTFS specification provided by SNIA
  • Understanding SSD functionality
  • Understanding the integration of LTFS with SSD

A Pausable File System

James Cain, Principal Software Architect, Quantel Limited

Abstract

As storage developers we are all obsessed with speed. This talk gives a different take on speed – how slow can we go? Can we even stop? If so for how long? The talk will also analyze why this is interesting, and demonstrate that the file system interface – and the way all software depends upon it – is one of the most powerful abstractions in operating systems.

The presenter will use his own implementation of an SMB3 server (running in user mode on Windows) to demonstrate the effects of marking messages as asynchronously handled and then delaying responses – in order to build up a complete understanding of the semantics offered by a pausable file system.

This exploration of the semantics of slow responses will demonstrate that researching slowing down can bear as much fruit as speeding up!

Learning Objectives

  • Understanding The Inversion of Control design pattern and how it can be applied to an implementation of a file server.
  • Exploring how the file system interface can be seen as a contract and that the semantics of that contract can be exploited for innovative uses.
  • Demonstrating that implementing a fairly small subset of SMB3 as a server is enough to conduct research in file systems.

Storage Solutions for Tomorrow's Physics Projects

Ulrich Fuchs, Service Manager, CERN

Abstract

The unique challenges in the field of nuclear and high energy physics are already pushing the limits of storage solutions today. However, the projects planned for the next ten years call for storage capacities, performance and access patterns that exceed the limits of many of today's solutions.

This talk will present the limitations in network and storage, suggest possible architectures for tomorrow's storage implementations in this field, and show results of the first performance tests done on various solutions (Lustre, NFS, block/object storage, GPFS, ...) for typical application access patterns.

Learning Objectives

  • Shared file system and storage performance requirements in science workloads
  • Setup and results of performance measurements of different file systems: the LUSTRE FS, NFS, BOS, GPFS
  • Technology differences between several file systems and storage solutions

Storage Class Memory Support in the Windows Operating System

Neal Christiansen, Principal Development Lead, Microsoft

Abstract

This talk will describe the changes being made to the Windows OS, its file systems and its storage stack in response to new and evolving storage technologies.

Learning Objectives

  • How Windows is adapting to new storage technologies

ZFS Async Replication Enhancements

Richard Morris, Principal Software Engineer, Oracle
Peter Cudhea, Principal Software Engineer, Oracle

Abstract

This presentation explores some design decisions around enhancing the zfs send and zfs receive commands to transfer already compressed data more efficiently and to recover from failures without re-sending data that has already been received.

Learning Objectives

  • High level understanding - how ZFS provides an efficient platform for async replication
  • Finding stability in the chaos - tension between what's stable in an archive and what isn't
  • Resolving significant constraints - why something simple turned out to be not so simple

ReFS v2: Cloning, Projecting, and Moving Data

J.R. Tipton, Development Lead, Microsoft

Abstract

File systems are fundamentally about wrapping abstractions around data: files are really just named data blocks. ReFS v2 presents a couple of new abstractions that open up greater control for applications and virtualization.

We'll cover block projection and cloning as well as in-line data tiering. Block projection makes it easy to efficiently build simple concepts like file splitting and copying as well as more complex ones like efficient VM snapshots. Inline data tiering brings efficient data tiering to virtualization and OLTP workloads.

Learning Objectives

  • How ReFS v2 exploits its metadata store to project file blocks at a fine granularity
  • How metadata is managed in ReFS v2
  • How data movement between tiers can happen efficiently while maintaining data integrity

Achieving Coherent and Aggressive Client Caching in Gluster, a Distributed System

Poornima Gurusiddaiah, Software Engineer, Red Hat
Soumya Koduri, Red Hat

Abstract

The presentation will be about how to implement:

  • File system notifications
  • Leases

in a distributed system, and how these can be leveraged to implement coherent and aggressive client-side caching.


Learning Objectives

  • Designing File system notification in Distributed System
  • Designing leases in Distributed System
  • Designing client caching in Distributed file system
  • Gluster and the benefits of xlator modeling

Petabyte-scale Distributed File Systems in Open Source Land: KFS Evolution

Sriram Rao, Partner Scientist Manager, Microsoft

Abstract

Over the past decade, distributed file systems based on a scale-out architecture that enables managing massive amounts of storage space (petabytes) have become commonplace. In this talk, I will first provide an overview of OSS systems (such as HDFS and KFS) in this space. I will then describe how these systems have evolved to take advantage of increasing network bandwidth in data center settings to improve application performance as well as storage efficiency. I will talk about these aspects by highlighting two novel features, multi-writer atomic append and (time-permitting) distributed erasure coding. These capabilities have been implemented in KFS and deployed in production settings to run analytic workloads.


High Resiliency Parallel NAS Cluster

Richard Levy, CEO and President, Peer Fusion

Abstract

The PFFS is a POSIX compliant parallel file system capable of high resiliency and scalability. The user data is dispersed across the cluster with no replication, thus providing significant savings. The resiliency level is selected by the user. Peer failures do not disrupt applications, as the cluster automatically performs on-the-fly repairs as required for read and write operations to complete successfully (applications can read and write data from failed peers).

There are two main protocols for communication between peers: the CLI protocol for namespace-type commands (e.g. link, unlink, symlink, mkdir, rmdir, etc.) and the MBP protocol for file I/O. Both protocols are highly efficient and produce very little chatter. They rely on multicast and inference to preserve efficient scalability as the peer count grows large. The software is highly threaded to parallelize network I/O, disk I/O and computation. Gateways provide access to the cluster by exporting VFS semantics and are accessible to both NFS and CIFS. Gateways hold no persistent user data, as the peers are the only persistent repository. The configuration and administration of the cluster (both gateways and peers) is very simple and consists of a small text file of a few lines. Healing the cluster is highly efficient, as the healing peers walk the file system and so only process occupied data blocks.

Learning Objectives

  • Resiliency design considerations for large clusters
  • The efficient use of multicast for scalability
  • Large clusters must administer themselves
  • Fault injection when failures are the nominal condition
  • Next step: 64K peers

Bridging On-Premises File Systems and Cloud Storage

Pankaj Datta, Consultant Software Engineer, Isilon Storage Division EMC

Abstract

Today cloud storage is playing an increasingly important role in meeting customers' storage needs because of its attractive cost, scalability, agility and data protection features. Cloud storage services are consumed through REST-based protocols. However, most NAS storage in data centers is consumed by the majority of applications through the SMB or NFS protocols. Customers are looking for ways to extend their NAS storage to cloud storage to capture these benefits without impacting their existing applications' workflows. Isilon built a solution that transparently moves file data from on-premises storage to the cloud while preserving the full namespace access in the local file system. This presentation discusses the challenges, subtle issues and the ways we address them.

Learning Objectives

  • Architecture deep dive
  • File policy based approach to identify inactive data with maximum flexibility
  • Solve the eventual consistency challenges
  • Help the customer to control cost and security
  • Inter-op challenges with local file system features (snapshots, replication and backup)

Cache Service in a Distributed File System

Zhongbing Yang, System Architect, Huawei

Abstract

Cache Service is a new architecture for implementing caching in a storage system. Two factors motivate this architecture: many products want to use the same cache architecture, and there are many different cache requirements within a single storage product. As an example, a scale-out NAS system and a detailed cache design based on Cache Service for that scale-out NAS are introduced, explaining how the cache service can work for different products and for different clients within one product.

Learning Objectives

  • New Cache Service architecture
  • Learn how multiple cache instances run in Cache Service
  • Learn NVDIMM+RAM Cache design

HARDWARE


PCI Express Non-Transparent Bridging for RDMA

Roland Dreier, Member of Technical Staff, Pure Storage

Abstract

Previous generations of the Pure Storage FlashArray used InfiniBand RDMA as a cluster interconnect between storage controllers in a system. The current generation replaces this with PCI Express Non-Transparent Bridging. We will describe how we preserved the key attributes of high throughput, low latency, CPU offloaded data movement and kernel bypass while moving the interconnect from a discrete IB adapter to a CPU-integrated PCIe port using new technologies including Linux vfio and PCIe NTB.

Learning Objectives

  • Key attributes of an RDMA transport
  • Description of PCIe NTB
  • Implementation of RDMA on PCIe NTB

RAIDShield: Characterizing, Monitoring, and Pro-actively Protecting Against Disk Failures

Ao Ma, Principal Engineer, EMC

Abstract

Modern storage systems orchestrate a group of disks to achieve their performance and reliability goals. Even though such systems are designed to withstand the failure of individual disks, failure of multiple disks poses a unique set of challenges. We empirically investigate disk failure data from a large number of production systems, specifically focusing on the impact of disk failures on RAID storage systems. Our data covers about one million SATA disks from 6 disk models for periods up to 5 years. We show how observed disk failures weaken the protection provided by RAID. The count of reallocated sectors correlates strongly with impending failures.

Learning Objectives

  • Empirical investigation of hard disk failures in production systems
  • Proactive protection of individual disk drives
  • Proactive protection of RAID storage system
  • Deployment results of the proactive protection in production system
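In the spirit of that proactive protection, a simple monitor can read each drive's reallocated-sector count from SMART and flag drives for early replacement; the sketch below shells out to smartctl and uses an arbitrary threshold for illustration, not a criterion taken from the study:

    import subprocess

    REALLOCATED_LIMIT = 100   # illustrative threshold, not a value from the talk

    def reallocated_sectors(device):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Reallocated_Sector_Ct" in line:
                return int(line.split()[-1])   # RAW_VALUE is the last column
        return 0

    for dev in ("/dev/sda", "/dev/sdb"):
        count = reallocated_sectors(dev)
        state = "replace proactively" if count > REALLOCATED_LIMIT else "ok"
        print(dev, count, state)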

The Changing Storage Testing Landscape

Peter Murray, Product Evangelist, Load DynamiX

Abstract

All-flash arrays, server-side storage arrays and traditional arrays - all have a place in today's data center. But one thing is very clear: effectively testing these solutions requires new and evolving thinking. We need to implement new testing capabilities and practices to ensure both engineers and QA can meet their needs with unbiased, actionable data.

In this session, we'll examine the various types of testing tools currently available and some that are evolving. Whether limits-testing tools, benchmarks, traditional disk-testing tools or workload-generation tools, all must be able to test with high performance and high configurability for multi-LUN and multi-volume testing and data content.

We'll examine the changing landscape of workload acquisition and workload generation, especially for all-flash arrays that are increasingly being used with multiple applications. We'll see how the application I/O blender effect, although seemingly random on the surface, actually contains patterns that can and should be emulated to ensure an array is properly shaken down before it goes into service.

We'll touch on how rich statistical reporting and trending help testers to see how performance varies under changing conditions, showing how an array performs at varying levels of user loads, queue depths, and compound/outstanding requests.

Learning Objectives

  • Examine the various testing tools on the market
  • New testing capabilities and practices so that engineers and QA can validate their solutions with unbiased, actionable data
  • Learn how statistical reporting and trending help testers to see how performance varies under changing conditions

KEYNOTE AND FEATURED SPEAKERS


Innovator, Disruptor or Laggard, Where Will Your Storage Applications Live? Next Generation Storage

Bev Crair, Vice President and General Manager, Storage Group, Intel

Abstract

With the emergence of Cloud Service Providers, new technology innovations, and elevated customer expectations, Enterprise Information Technologists continue to be faced with scalability and cost pressures. Through next-generation efficient, cost-effective scalable architectures, IT is transforming from a Cost-Center Service Provider to a valued Business Partner.

It's imperative that storage developers understand the drivers of this transition, how to leverage the open source community, and how to embrace next generation memory storage and cloud service provider best practices in response to their infrastructure and workloads. In this session, Vice President and General Manager Bev Crair will discuss the leadership role Intel® is playing in driving the open source community for software defined storage, server based storage and upcoming technologies that will shift how storage is architected.

Learning Objectives

  • Understanding the role of software development in accelerating the storage transformation
  • Understanding of the technology trends in the storage industry
  • Understanding the critical role of server-based storage in a software defined storage environment
  • Understanding of Intel® technologies for storage and the open source community

The Long-Term Future of Solid State Storage

Jim Handy, General Director, Objective Analysis

Abstract

Today, solid-state storage is an extension of established storage technologies, achieved by hiding flash behind existing hardware and software protocol layers. Over time these layers will be abandoned in favor of new architectures.

This presentation will examine research of new solid state memory and storage types, and new means of integrating them into highly-optimized computing architectures. This will lead to a discussion of the way that these will impact the market for computing equipment.

Learning Objectives

  • Understanding how solid state storage arrived at its current state
  • A background on the changes in store for memory chips
  • How computing architectures will change to squeeze the highest performance from these chips
  • How the market will react to these changes

Concepts on Moving From SAS connected JBOD to an Ethernet Connected JBOD

Jim Pinkerton, Partner Architect Lead, Microsoft

Abstract

Today’s Software Defined Storage deployments are dominated by SAS attached just-a-bunch-of-disks (JBOD), with some new drives moving to an Ethernet connected blob store interface. This talk examines the advantages of moving to an Ethernet connected JBOD, what infrastructure has to be in place, what performance requirements are needed to be competitive, and examines technical issues in deploying and managing such a product. The talk concludes with a real world example, including performance analysis.

Learning Objectives

  • Understand the potential advantages of moving to an Ethernet connected JBOD (EBOD)
  • Understand the architectural differences between a JBOD based approach and an EBOD based approach.
  • Understand the complexities that are solved and that need to be solved, anchored in real world data, to be able to deliver an EBOD to the market.

Planning for the Next Decade of NVM Programming

Andy Rudoff, SNIA NVM Programming TWG, Intel

Abstract

We imagine a future where persistent memory is common in the data center. How will enterprise-class applications leverage this resource? How will middleware, libraries, and application run-time environments change? In this talk, Andy will describe how emerging NVM technologies and related research are causing a change to the software development ecosystem. Andy will describe use cases for load/store accessible NVM, some transparent to applications, others non-transparent. Starting with current examples of NVM Programming, Andy will describe where he believes this is leading us, including the likelihood that programmers in the future must comprehend numerous types of memories with different qualities and capabilities.


Software Defined Storage - What Does it Look Like in 3 Years?

Richard McDougall, Big Data and Storage Chief Scientist, VMware

Abstract

Storage is being recast as a set of services implemented in software with industry standard servers, allowing a radical simplification of provisioning and management at a significantly lower cost. At the same time, there are significant changes in flash, memory and server hardware architecture that software can take advantage of.

In this talk, we'll survey and contrast the popular software architectural approaches and investigate the changing hardware architectures upon which these systems are built.

Finally, we'll survey the new application data services that are becoming mainstream for container and big-data environments.


Why the Storage You Have is Not the Storage Your Data Needs

Laz Vekiarides, CTO and Co-founder, ClearSky Data

Abstract

How much of your data is actually hot? How much are you storing on your hottest tier? If you’re relying on traditional storage systems, the gap between those two answers could be major. Enterprises are investing in the fastest, most expensive options available for their storage without fully understanding the nature of each workload. Many storage developers lack access to monitoring tools that base insights on hard evidence, and when you consider the breakdown of most enterprise workloads, the amount of wasted resources can be surprising.

It’s imperative that some detailed research be conducted on the nature of enterprise workloads and the placement of hot, warm and cold data before a storage system is built. In this session, CTO of ClearSky Data Laz Vekiarides will share some of the questions every storage architect should ask, such as:

Learning Objectives

  • Is your most expensive data on your most expensive storage tier?
  • What sorts of latencies are your applications seeing?
  • What real-world benefits can be derived from virtualized tiering technology? Where does it fail?

Emerging Trends in Software Development

Donnie Berkholz, Research Director, 451 Research

Abstract

Donnie Berkholz leads the development, DevOps and IT ops team at 451 Research. In this talk, he will draw on his experience and research to discuss emerging trends in how software across the stack is created and deployed, with a particular focus on relevance to storage development and usage. Donnie will discuss the potential impacts of these trends to how storage software is built as well as what kinds of new use cases it needs to support.


Learnings from Nearly a Decade of Building Low-cost Cloud Storage

Gleb Budman, CEO, Backblaze

Abstract

For nearly a decade Backblaze has built one of the lowest-cost cloud storage systems. In this keynote, we will share the philosophies and technologies that made that possible. We'll cover the design of the storage hardware, the cloud storage file system software, and the operations processes behind a system that currently stores over 150 petabytes and takes in 5 petabytes more every month. Whether you build your own storage systems, run storage in-house, or use cloud storage, we hope to provide you with concrete takeaways.

Learning Objectives

  • Understanding cloud storage costs
  • Understanding how to build your own storage hardware
  • Understanding cloud storage file system considerations
  • Understanding storage operations processes

MANAGEMENT

 

 

 

 

DMTF Redfish Overview

Jeff Autor, Distinguished Technologist, Hewlett-Packard

Abstract

The DMTF’s Scalable Platforms Management Forum (SPMF) is working to create and publish an open industry standard specification, called “Redfish” for simple, modern and secure systems management using RESTful methods and JSON formatting. This session will cover the design tenets, protocol and payload, expected deliverables and time frames.
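To make the RESTful/JSON approach concrete, here is a minimal sketch of walking a Redfish service root with plain HTTP calls; the management host and credentials are hypothetical, and real services typically use session tokens rather than basic authentication.

```python
#!/usr/bin/env python3
"""Sketch: walk a Redfish service with plain REST/JSON calls.

The management endpoint and credentials below are hypothetical.
"""
import requests

BASE = "https://bmc.example.com"     # hypothetical management endpoint
AUTH = ("admin", "password")         # hypothetical credentials

def get(path):
    r = requests.get(BASE + path, auth=AUTH, verify=False)  # lab use only
    r.raise_for_status()
    return r.json()

root = get("/redfish/v1/")                         # Redfish service root
systems = get(root["Systems"]["@odata.id"])        # ComputerSystem collection
for member in systems.get("Members", []):
    system = get(member["@odata.id"])
    print(system.get("Name"), system.get("PowerState"))
```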

Learning Objectives

  • Understanding the Redfish goals
  • Understanding how Redfish applies to storage topics
  • Developing Redfish support
  • Understanding the Redfish Open Source efforts

The State of SMI-S – The Standards Based Approach for Managing Infrastructure

Chris Lionetti, Reference Architect, NetApp

Abstract

SMI-S is the standards-based way to expose, modify, and consume the storage used in data centers. SMI-S can discover storage resources such as RAID groups and primordial disks, it can configure capabilities like thin provisioning, initiator groups and mappings for file shares or exports, and it can be used to monitor the ongoing operations of storage infrastructure.

These activities are cross-vendor and cover end-to-end operations from the host through the switching infrastructure to the storage controllers and down to the logical and physical storage devices. This session will appeal to Data Center Managers, Architects, and Development Managers, and will approach the topic from an ‘Operations’ perspective.

The audience will receive a fundamental grounding in SMI-S and a clear understanding of its value in a production environment. This session will also address the newly created SMI-S getting started guide.
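As a taste of the 'Operations' perspective, the following sketch uses the pywbem client to enumerate volumes from an SMI-S provider; the provider URL, credentials, and CIM namespace are hypothetical and vary by vendor, and certificate handling is omitted.

```python
#!/usr/bin/env python3
"""Sketch: enumerate storage volumes from an SMI-S (CIM/WBEM) provider with pywbem.

Endpoint, credentials, and namespace are hypothetical; the CIM namespace in
particular differs between vendor providers.
"""
import pywbem

conn = pywbem.WBEMConnection(
    "https://smis-provider.example.com:5989",   # hypothetical provider URL
    ("monitor", "password"),                    # hypothetical credentials
    default_namespace="root/cimv2")             # provider-specific namespace

for vol in conn.EnumerateInstances("CIM_StorageVolume"):
    # DeviceID, BlockSize and NumberOfBlocks are standard CIM_StorageVolume properties.
    print(vol["DeviceID"], vol["BlockSize"], vol["NumberOfBlocks"])
```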

Learning Objectives

  • Understand the value proposition of SMI-S adherence
  • Know the differing approaches to deploying SMI-S
  • Appreciate how consumers gather and use SMI-S data
  • Realistically plan to develop an SMI-S provider

Enterprise-Grade Array-Based Replication and Disaster Recovery with SMI-S, Windows Server, System Center and Azure Site Recovery

Amit Virmani, Senior Software Engineer, Microsoft Corp
Jeff Li, Senior Software Engineer, Microsoft Corp

Abstract

Microsoft System Center Virtual Machine Manager (SCVMM) automates the complete end-to-end discovery and integration needed to leverage replication capabilities provided by our enterprise storage partners using SMI-S. Windows Server provides native support for SMI-S providers that SCVMM can utilize. Building on top of SCVMM primitives, Azure Site Recovery provides the end-to-end disaster recovery and orchestration solution, automating the creation and management of all target objects including storage and compute. Microsoft is working with multiple storage partners to deliver this functionality: EMC, NetApp, HP, Hitachi, IBM, Huawei, Dell Compellent and Fujitsu.

Learning Objectives

  • Understand what the ASR Disaster Recovery solution means – what is Planned Failover, Unplanned Failover, Test Failover, etc. Learn how SMI-S provides the primitives in the Replication Services profile
  • Understand how SCVMM leverages the replication discovery and management profile using the developers guide distributed to partners. This includes the changes made in the Replication Services
  • Deep dive into Virtual Machine Manager leveraging Pass Through service in Windows Server 2012 R2 to discover and manage replication capabilities. Capabilities include discovery, replica provisioning
  • Understand how ASR orchestrates the replication automation and learn how replications groups enable multiple virtual machines to be protected and replicated together

 

 

NETWORKING

 

 

 

 

Benefits of NVMe Over Fabrics and Demonstration of a Prototype

Rob Davis, VP of Storage Technology, Mellanox Technologies

Abstract

NVMe offers a faster way to connect to solid state storage than traditional SAS and SATA interfaces, which were designed for spinning disk. It eliminates the SCSI layer and supports better bandwidth, IOPS, and latency than 12Gb SAS. However, traditional NVMe keeps the storage devices “captive” within the server or storage box and does not scale across distance, multiple storage nodes, or hundreds of PCIe devices. NVM Express, Inc. has proposed a standard to support remote access of NVMe devices across high speed, low-latency fabrics. Mellanox will present examples and prototype performance results of running the forthcoming standard over RDMA interconnects such as InfiniBand and RoCE (RDMA over Converged Ethernet).

Learning Objectives

  • Understand the benefits of NVMe over SAS or SATA interfaces.
  • Learn how NVMe works with RDMA on InfiniBand or RoCE fabrics.
  • See test results from an NVMe over Fabrics prototype involving NVMe flash devices
  • Understand how NVMe Over Fabrics works with 100Gb Ethernet

Implementing NVMe Over Fabrics

Wael Noureddine, VP Technology, Chelsio Communications

Abstract

NVMe is gaining momentum as the standard high performance disk interface that eliminates various bottlenecks in accessing PCIe SSD devices. NVMe over Fabrics extends NVMe beyond the confines of a PCIe fabric by utilizing a low latency network interconnect such as iWARP RDMA/Ethernet to attach NVMe devices. iWARP is unique in its scalability and reach, practically eliminating constraints on the architecture, size and distance of a storage network. This talk presents Chelsio’s open-source generic block device based implementation and benchmark results that illustrate the benefits in performance and efficiency of the new fabric, opening the way to unprecedented storage performance and scale.

Learning Objectives

  • Introduction to NVMe, NVMe over Fabrics and peer-to-peer communications
  • Develop a complete understanding of Chelsio block device implementation and performance results
  • Learn how iWARP fits well with NVMe

A Cost Effective, High Performance, Highly Scalable, Non-RDMA NVMe Fabric

Bob Hansen, VP Systems Architecture, Apeiron Data Systems

Abstract

Large server count, scale out cluster applications require non-volatile storage performance well beyond the capabilities of legacy storage networking technologies. Until now the only solution has been to load SSDs directly into the cluster servers. This approach delivers excellent raw storage performance, but introduces many disadvantages including: single points of failure, severely limited configuration/provisioning flexibility and added solution cost. This presentation discusses a new, scalable, very high performance storage architecture that delivers all the simplicity and promise of DAS with the efficiency and capability of network storage, at an industry leading cost point.


SNIA Tutorial:
SCSI Standards and Technology Update

Rick Kutcipal, Product Planning - Data Center Storage Group, Avago Technologies
Greg McSorley, Vice President, SCSI Trade Association

Abstract

SCSI continues to be the backbone of enterprise storage deployments and continues to rapidly evolve by adding new features, capabilities, and performance enhancements. This presentation includes an up-to-the-minute recap of the latest additions to the SAS standard and road maps, the status of 12Gb/s SAS deployment, advanced connectivity solutions, MultiLink SAS™, SCSI Express, and 24Gb/s development. Presenters will also provide updates on new SCSI features such as atomic writes, Zoned Block Commands (ZBC) and Storage Intelligence which provides mechanisms for improved efficiency, performance and endurance with solid state devices.

Learning Objectives

  • Attendees will learn how SAS continues to grow and thrive, in part, because of the Advanced Connectivity Roadmap, which offers a solid connectivity scheme based on the versatile Mini-SAS HD connector
  • The latest development status and design guidelines for 12Gb/s SAS will be discussed, as well as the current status of extending SAS to 24Gb/s.
  • Attendees will receive updates on new SCSI features such as atomic writes, Zoned Block Commands and Storage Intelligence, which provide mechanisms for improved efficiency, performance and endurance with solid state devices

Growth of the iSCSI RDMA (iSER) Ecosystem

Rob Davis, VP of Storage Technology, Mellanox Technologies

Abstract

iSCSI RDMA (iSER) has been the fastest available block storage protocol for several years, but the number of commercially available storage targets has previously been limited. Now new storage solutions from vendors such as NetApp are supporting iSER, along with iSER initiators in new environments such as FreeBSD. This makes it easier for both cloud service providers and enterprises to deploy iSER. In addition, improvements to the iSER and Linux SCSI layers allow faster iSER performance than before over both InfiniBand and 40Gb Ethernet links.

Learning Objectives

  • Understand similarities and differences between iSER and standard iSCSI over TCP
  • See the growth of the iSER ecosystem for both storage target and initiator support
  • Learn about new features supported in iSER including T10-DIF, initiator/target discovery over RDMA, and performance optimization.
  • See benchmark results from the latest version of iSER running over 40/56Gb Ethernet and FDR InfiniBand networks
  • See metrics for using iSER with the latest flash storage

FCoE Direct End-Node to End-Node (aka FCoE VN2VN)

John Hufferd, Consultant Hufferd Enterprises

Abstract

A new concept has just been accepted for standardization in the Fibre Channel (T11) standards committee; it is called FCoE VN2VN (aka Direct End-Node to End-Node).

The FCoE standard, which specifies the encapsulation of Fibre Channel frames into Ethernet frames, is being extended to permit FCoE connections directly between FC/FCoE End-Nodes.

The tutorial will show the Fundamentals of the extended FCoE concept that permits it to operate without FC switches or FCoE Switches (aka FCF) and will describe how it might be exploited in Small, Medium or Enterprise Data Center environments -- including the "Cloud" IaaS (Infrastructure as a Service) provider environments.

Learning Objectives

  • The audience will gain a general understanding of the concept of using a Data Center type Ethernet for the transmission of Fibre Channel protocols without the need for an FCoE Forwarder (FCF).
  • The audience will gain an understanding of the benefits of converged I/O and how a Fibre Channel protocol can share an Ethernet network with other Ethernet based protocols, directly between End Nodes
  • The audience will gain an understanding of the potential business value and configurations that are useful for gaining maximum value, including the value to the "Cloud" IaaS (Infrastructure as a Service) providers

 

 

NEW THINKING

 

 

 

 

Pelican: A Building Block for Exascale Cold Data Storage

Austin Donnelly, Principal Research Software Development Engineer, Microsoft

Abstract

Pelican is a rack-scale design for cheap storage of data which is rarely accessed: cold data. It uses spun-down hard drives to maximise density and reduce costs. A Pelican rack supplies only enough resources (power, cooling, bandwidth) to support the cold data workloads we target, significantly reducing Pelican's total cost of ownership compared to traditional disk-based systems provisioned for peak performance.

The Pelican storage stack manages these limited resources and their constraints. We describe the data layout and IO scheduling algorithms which ensure these constraints are not violated, while making best use of the available resources. We evaluate Pelican both in simulation and with a full rack, and show that Pelican performs well: delivering both high throughput and acceptable latency.


Torturing Databases for Fun and Profit

Mai Zheng, Assistant Professor Computer Science Department - College of Arts and Sciences, New Mexico State University

Abstract

Programmers use databases when they want a high level of reliability. Specifically, they want the sophisticated ACID (atomicity, consistency, isolation, and durability) protection modern databases provide. However, the ACID properties are far from trivial to provide, particularly when high performance must be achieved. This leads to complex and error-prone code—even at a low defect rate of one bug per thousand lines, the millions of lines of code in a commercial OLTP database can harbor thousands of bugs.

Here we propose a method to expose and diagnose violations of the ACID properties. We focus on an ostensibly easy case: power faults. Our framework includes workloads to exercise the ACID guarantees, a record/replay subsystem to allow the controlled injection of simulated power faults, a ranking algorithm to prioritize where to fault based on our experience, and a multi-layer tracer to diagnose root causes. Using our framework, we study 8 widely-used databases, ranging from open-source key-value stores to high-end commercial OLTP servers. Surprisingly, all 8 databases exhibit erroneous behavior. For the open-source databases, we are able to diagnose the root causes using our tracer, and for the proprietary commercial databases we can reproducibly induce data loss.


Skylight — A Window on Shingled Disk Operation

Peter Desnoyers, Professor College of Computer and Information Science, Northeastern University

Abstract

We introduce Skylight, a novel methodology that combines software and hardware techniques to reverse engineer key properties of drive-managed Shingled Magnetic Recording (SMR) drives. The software part of Skylight measures the latency of controlled I/O operations to infer important properties of drive-managed SMR, including type, structure, and size of the persistent cache; type of cleaning algorithm; type of block mapping; and size of bands. The hardware part of Skylight tracks drive head movements during these tests, using a high-speed camera through an observation window drilled through the cover of the drive. These observations not only confirm inferences from measurements, but resolve ambiguities that arise from the use of latency measurements alone. We show the generality and efficacy of our techniques by running them on top of three emulated and two real SMR drives, discovering valuable performance-relevant details of the behavior of the real SMR drives.


f4: Facebook’s Warm BLOB Storage System

Satadru Pan, Software Engineer, Facebook

Abstract

Facebook’s corpus of photos, videos, and other Binary Large OBjects (BLOBs) that need to be reliably stored and quickly accessible is massive and continues to grow. As the footprint of BLOBs increases, storing them in our traditional storage system, Haystack, is becoming increasingly inefficient. To increase our storage efficiency, measured in the effective-replication-factor of BLOBs, we examine the underlying access patterns of BLOBs and identify temperature zones that include hot BLOBs that are accessed frequently and warm BLOBs that are accessed far less often. Our overall BLOB storage system is designed to isolate warm BLOBs and enable us to use a specialized warm BLOB storage system, f4. f4 is a new system that lowers the effective-replication-factor of warm BLOBs while remaining fault tolerant and able to support the lower throughput demands. f4 currently stores over 65PBs of logical BLOBs and reduces their effective-replication-factor from 3.6 to either 2.8 or 2.1. f4 provides low latency; is resilient to disk, host, rack, and datacenter failures; and provides sufficient throughput for warm BLOBs.

NFS

 

 

Introduction to Highly Available NFS Server on Scale-Out Storage Systems Based on GlusterFS

Soumya Koduri, Senior Software Engineer, Red Hat India
Meghana Madhusudhan, Software Engineer, Red Hat

Abstract

Many enterprises still heavily depend on NFS to access their data from different operating systems and applications. NFS-Ganesha is a user-space file server that supports NFSv3, NFSv4, NFSv4.1 as well as pNFS.

GlusterFS has now added NFS-Ganesha server to its NFS stack to eventually replace native Gluster-NFS server which supports only NFSv3. The integration with NFS-Ganesha now means additional protocol support w.r.t. NFSv4, better security and authentication mechanisms for enterprise use. The upcoming release of GlusterFS (3.7) introduces Clustered or multi-head active/active NFS support using Pacemaker and Corosync for better availability. There is also tighter integration with Gluster CLI to manage NFS-Ganesha exports. This presentation is aimed at providing a basic overview of the entire solution and step-by-step configuration.

Learning Objectives

  • Basic architecture walk-through of nfs-ganesha and what the integration with GlusterFS means.
  • Architecture overview of the multi-head active/active highly available NFS solution.
  • Step-by-step guide to configure NFS-Ganesha on GlusterFS using newly introduced CLI options.
  • Requirements and best practice recommendations for HA configuration for NFS-Ganesha with GlusterFS.

Instantly Finding a Needle of Data in a Haystack of Large-Scale NFS Environment

Gregory Touretsky, Product Manager, Infinidat

Abstract

Intel's design environment heavily depends on a large-scale NFS infrastructure with tens of petabytes of data. A global namespace helps to navigate this large environment in a uniform way from 60,000 compute servers.

But what if a user doesn't know where the piece of data he is looking for is located?

Our customers used to spend hours waiting for recursive "grep" commands to complete – or preferred not to bother with some less critical queries.

In this talk, we'll cover how Intel IT identified an opportunity to provide a faster way to look for information within this large-scale NFS environment. We'll review the various open source solutions that were considered, and how we decided to combine a home-grown scalable NFS crawler with the open source ElasticSearch engine to index parts of our NFS environment.

As part of this talk we'll discuss various challenges and our ways to mitigate them, including:

  • crawler scalability required to index large amounts of dynamically changing data within pre-defined indexing SLA
  • Index scalability and performance requirements
  • Relevancy of the results presented in search queries by customers
  • User interface considerations
  • Security aspects of the index access control

This should be an interesting conversation both for storage vendors – covering a useful feature which might be implemented as part of an NFS environment – and for storage customers who may benefit from such a capability.
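A minimal sketch of the crawler-plus-index idea is shown below; the mount point, index name and cluster URL are hypothetical, and unlike the production crawler described here it is neither incremental nor scalable.

```python
#!/usr/bin/env python3
"""Sketch: crawl an NFS mount and index file metadata into Elasticsearch.

Mount point, index name, and cluster URL are hypothetical; a production
crawler would be incremental, parallel, and throttled to protect the filers.
"""
import os
from elasticsearch import Elasticsearch

es = Elasticsearch("http://search.example.com:9200")   # hypothetical cluster
MOUNT = "/mnt/projects"                                 # hypothetical NFS mount

for root, _dirs, files in os.walk(MOUNT):
    for name in files:
        path = os.path.join(root, name)
        try:
            st = os.stat(path)
        except OSError:
            continue                                    # file vanished mid-crawl
        doc = {"path": path, "name": name,
               "size": st.st_size, "mtime": st.st_mtime,
               "uid": st.st_uid, "gid": st.st_gid}
        es.index(index="nfs-files", document=doc)       # 'body=' on older clients
```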

Learning Objectives

  • How to implement scalable indexing and search on top of large scale NFS
  • Scalable crawling with controlled performance impact on shared file servers
  • Security aspects of data index and search representation

Scalable Metadata in NFSv4

Casey Bodley, Software Developer, CohortFS

Abstract

With NFS version 4.1, pNFS was introduced to provide clients with direct access to storage devices to reduce the bottleneck of a single NFS server.

The pNFS Metadata Striping draft applies pNFS scale-out to metadata, introducing cooperating pNFS MDS servers, striping of files and directory entries across multiple servers, a new lightweight redirection mechanism for OPEN, GETATTR, CREATE, and other mutating operations, and new parallel directory enumeration. pNFS Metastripe breaks the MDS bottleneck in pNFS, and gives NFS the ability to operate efficiently at larger scales and under more demanding metadata workloads.

Learning Objectives

  • Describe metadata scalability challenges in enterprise, HPC, and cloud
  • Introduce the pNFS Metastripe protocol, including changes since early proposal drafts by Eisler
  • Present insights from CohortFS implementation of metastripe in Ganesha and Ceph
  • Review current state and progress of IETF draft

pNFS/RDMA: Possibilities

Chuck Lever, Linux Kernel Architect, Oracle

Abstract

The presenter will discuss possible ways to merge pNFS with persistent memory and fast storage fabrics.

Learning Objectives

  • Challenges of NFS with fast storage
  • What is a pNFS layout?
  • Does pNFS work on RDMA transports?
  • Implications of pNFS/RDMA

NVMe FABRIC

 

 

Donard: NVM Express for Peer-2-Peer between SSDs and other PCIe Devices

Stephen Bates, Technical Director, PMC

Abstract

In this paper we extend previous work to include p2p transfers between NVMe devices and RDMA capable NICs running protocols like Infiniband, RoCE and iWARP.

We present experimental results using both 10GbE iWARP and 56G InfiniBand NICs that show how the latency associated with remote transfer of data can be reduced whilst also offloading the CPU, allowing it to focus on other tasks.

We show how this work can act as a precursor for the NVMe over Fabrics work currently being standardized. We also show how the Controller Memory Buffer (CMB) feature introduced in NVMe 1.2 can be utilized in a novel fashion to aid this work.

Learning Objectives

  • What are the benefits of NVM Express
  • How can NVM Express and RDMA be utilized prior to NVMe over Fabrics
  • How Donard code builds on open-source code
  • How latency, bandwidth and CPU offload can all be improved using peer-2-peer

NVM PROGRAMMING

 

 

SNIA Tutorial:
The NVM Revolution

Paul von Behren, Software Architect, Intel Corporation

Abstract

This presentation provides an introduction to the current activities leading to software architectures and methodologies for new NVM technologies, including the activities of the SNIA Non-Volatile Memory (NVM) Technical Working Group. This session includes a review and discussion of the impacts of the SNIA NVM Programming Model (NPM). We will preview the current work on new technologies, including remote access, high availability, clustering, atomic transactions, error management, and current methodologies for dealing with NVM.

OBJECT DRIVES

 

 

SNIA Tutorial:
Object Drives: A New Architectural Partitioning

Mark Carlson, Principal Engineer, Industry Standards, Toshiba

Abstract

A number of scale out storage solutions, as part of open source and other projects, are architected to scale out by incrementally adding and removing storage nodes. Example projects include:

  • Hadoop’s HDFS
  • CEPH
  • Swift (OpenStack object storage)

The typical storage node architecture includes inexpensive enclosures with IP networking, CPU, Memory and Direct Attached Storage (DAS). While inexpensive to deploy, these solutions become harder to manage over time. Power and space requirements of Data Centers are difficult to meet with this type of solution. Object Drives further partition these object systems allowing storage to scale up and down by single drive increments.

This talk will discuss the current state and future prospects for object drives. Use cases and requirements will be examined and best practices will be described.

Learning Objectives

  • What are object drives?
  • What value do they provide?
  • Where are they best deployed?

Beyond LBA: New Directions in the Storage Interface

Abhijeet Gole, Senior Director of Engineering, Toshiba

Abstract

New storage devices are emerging that go beyond the traditional block interface and support key value protocol interfaces. In addition some of the emerging devices include capabilities to run applications on the device itself. This talk will explore the paradigm shift introduced by these new interfaces and modes of operation of storage devices.
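To illustrate the interface shift, the toy sketch below contrasts an LBA block interface with a key/value drive interface; it is purely illustrative and is not the Kinetic wire protocol, which exchanges protocol-buffer messages with the drive over TCP.

```python
"""Toy contrast between a block (LBA) interface and a key/value drive interface.

Purely illustrative -- not the Kinetic protocol or any vendor's actual API.
"""

class BlockDevice:
    """Traditional interface: fixed-size blocks addressed by logical block address."""
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.data = {}                       # sparse LBA -> bytes

    def write(self, lba, buf):
        assert len(buf) == self.block_size   # callers must pack data into blocks
        self.data[lba] = buf

    def read(self, lba):
        return self.data.get(lba, b"\0" * self.block_size)


class KeyValueDrive:
    """Object-drive style interface: variable-size values addressed by key."""
    def __init__(self):
        self.store = {}

    def put(self, key: bytes, value: bytes, sync=True):
        self.store[key] = value              # 'sync' would map to on-drive durability

    def get(self, key: bytes) -> bytes:
        return self.store[key]

    def get_key_range(self, start: bytes, end: bytes):
        return sorted(k for k in self.store if start <= k <= end)
```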

Learning Objectives

  • The developer will learn the key technical details of the OpenKinetic protocol as an example
  • Learn how key value protocols map to the semantics that modern-day developers are familiar with
  • Learn how your software can be modified to take advantage of these new interfaces and the impact of running storage applications on the devices
  • Leave with an in depth understanding of the new interfaces and how they apply to your project
  • Leave with an understanding of the current adoption by customers and other developers

PERFORMANCE

 

 

Designing SSD-Friendly Applications

Zhenyun Zhuang, Senior Performance Engineer, LinkedIn

Abstract

SSD is being increasingly adopted for improved application performance. SSD works quite differently from its HDD counterpart. Hence, many conventional applications that are designed and optimized for HDD may not fit well with SSD characteristics. In particular, developers typically know little about SSD and simply treat SSD as a "faster" HDD. In this talk, we will present a set of guidelines for designing SSD-friendly applications that not only maximize application performance, but also maximize SSD life.
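One representative guideline is to batch small updates into large, aligned, append-only writes rather than issuing many tiny in-place overwrites; a minimal sketch of that pattern follows (illustrative only, and far from the full set of guidelines covered in the talk).

```python
#!/usr/bin/env python3
"""Sketch of one SSD-friendly pattern: accumulate small logical updates and
issue them as large sequential writes instead of many tiny in-place overwrites.
"""
import os

SEGMENT = 1 << 20          # 1 MiB write unit, a multiple of typical flash page sizes

class AppendLog:
    def __init__(self, path):
        self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
        self.buf = bytearray()

    def update(self, record: bytes):
        self.buf += record                 # accumulate small logical updates
        if len(self.buf) >= SEGMENT:
            self.flush()

    def flush(self):
        if self.buf:
            os.write(self.fd, self.buf)    # one large sequential write
            self.buf.clear()

    def close(self):
        self.flush()
        os.close(self.fd)
```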

Learning Objectives

  • Developers can gain thorough understanding of SSD performance
  • Developers are able to design improved applications, data structures and algorithms to maximize performance with SSD

Load-Sto-Meter: Generating Workloads for Persistent Memory

Doug Voigt, Distinguished Technologist, HP
Damini Ashok Chopra, Software Intern - Office of Chief Technologist, HP

Abstract

New persistent memory technologies allow IO to be replaced with memory-mapped files where the primary operations are load, store, flush and fence instructions executed by CPUs. This creates a new need for software to generate well-understood workloads made up of those operations in order to characterize implementations of persistent memory related functionality. This session describes a proposal for such a workload generator, which could play a role for PM solutions similar to that of IOMeter for IO.
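A conceptual sketch of such a load/store/flush workload is shown below. It uses a memory-mapped file as a stand-in: a real persistent-memory workload generator would issue CPU cache-flush and fence instructions, which cannot be expressed directly from Python, and the file path and access mix here are arbitrary.

```python
#!/usr/bin/env python3
"""Conceptual sketch of a load/store/flush workload over a memory-mapped file.

A real PM workload generator issues CPU flush/fence instructions (e.g. CLWB +
SFENCE); this only approximates the shape of such a workload.
"""
import mmap
import os
import random

PATH, SIZE = "/tmp/pmem-workload.bin", 16 << 20   # hypothetical backing file

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o644)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

rng = random.Random(42)
for _ in range(100_000):
    off = rng.randrange(0, SIZE - 64, 64)         # 64-byte (cache-line sized) accesses
    if rng.random() < 0.5:
        _ = buf[off:off + 64]                     # "load"
    else:
        buf[off:off + 64] = b"\xab" * 64          # "store"
        page = off & ~(mmap.PAGESIZE - 1)
        buf.flush(page, mmap.PAGESIZE)            # flush the containing page

buf.close()
os.close(fd)
```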

Learning Objectives

  • Learn how pure load/store workloads can be generated
  • Learn about parameters that should govern the creation of pure load/store workloads

Application-Level Benchmarking with SPEC SFS 2014

Nick Principe, Senior Software Engineer, EMC
Vernon Miller, Performance Engineer, IBM

Abstract

A technical deep-dive into the four SPEC SFS 2014 workloads at different levels of the storage stack, and how client performance and configuration can affect benchmark results. Multiple storage protocols will be addressed, including NFS, SMB, and FC – yes, you CAN test a block storage array with a file-level benchmark!

Learning Objectives

  • Deep analysis of storage solution performance
  • Effects of protocol and operating system on storage performance measurement
  • Observe workload changes as traffic flows from client through storage to disk

Online Cache Analysis And Its Applications For Enterprise Storage Systems

Irfan Ahmad, CTO, CloudPhysics

Abstract

It is well-known that storage cache performance is non-linear in cache size and that the benefit of caches varies widely by workload. This means that no two real workload mixes have the same cache behavior! Existing techniques for profiling workloads don’t measure data reuse, nor do they predict changes in performance as cache allocations are varied. Since caches are a scarce resource, workload-aware cache behavior profiling is highly valuable, with many applications.

We will describe how to make storage cache analysis efficient enough to be able to put directly into a commercial cache controller. Based on work published at FAST '15, we'll show results including computing miss ratio curves (MRCs) on-line in a high-performance manner (~20 million IO/s on a single core).

The technique enables a large number of use cases in all storage devices. These include visibility into cache performance curves for sizing the cache to actual customer workloads, troubleshooting field performance problems, online selection of cache parameters (including cache block size and read-ahead strategy) to tune the array to actual customer workloads, and dynamic MRC-guided cache partitioning, which improves cache hit ratios without adding hardware. Furthermore, the work applies to all types of application caches, not just those in enterprise storage systems.
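For reference, the straightforward exact way to build a miss ratio curve is the LRU stack-distance method sketched below; the techniques discussed in the talk (based on the FAST '15 work) approximate the same curve with sampling at a tiny fraction of this cost.

```python
#!/usr/bin/env python3
"""Sketch: exact miss ratio curve (MRC) from an LRU reuse-distance histogram.

This is the simple O(N * M) stack-distance method, shown only to illustrate
what an MRC is; the talk's sampled approach is vastly more efficient.
"""
from collections import OrderedDict

def miss_ratio_curve(trace, max_cache_blocks):
    stack = OrderedDict()                 # block -> None, most recently used last
    hist = [0] * max_cache_blocks         # hist[d]: accesses with reuse distance d
    cold = 0
    for block in trace:
        if block in stack:
            # Reuse distance = number of distinct blocks touched since last access.
            depth = len(stack) - list(stack).index(block) - 1
            del stack[block]
            if depth < max_cache_blocks:
                hist[depth] += 1
            else:
                cold += 1                 # deeper than any modeled cache: always a miss
        else:
            cold += 1                     # first reference: compulsory miss
        stack[block] = None
    total = len(trace)
    misses = cold + sum(hist)             # equals total: every access starts as a miss
    mrc = []
    for size in range(1, max_cache_blocks + 1):
        misses -= hist[size - 1]          # a cache of 'size' blocks hits distances < size
        mrc.append(misses / total)
    return mrc

# Example: a small synthetic trace of block numbers.
print(miss_ratio_curve([1, 2, 3, 1, 2, 3, 4, 1], max_cache_blocks=4))
```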

Learning Objectives

  • Storage cache performance is non-linear
  • Benefit of caches varies widely by workload mix
  • Working set size estimates don't work for caching
  • How to make storage cache analysis available in a commercial cache controller
  • New use cases for cache analysis in enterprise storage systems

SNIA Tutorial:
NVDIMM-SSDs Tested to the SNIA SSD Performance Test Specification

Eden Kim, CEO, Calypso Systems

Abstract

NVDIMM-N modules are being introduced as a fast tier storage option in the Memory Channel. Learn how to evaluate and test NVDIMM-N modules to the SNIA Solid State Performance Specification Enterprise Version 1.1 (PTS-E v1.1) using the Intel open source block IO driver for Linux. Find out what test settings and variables affect NVDIMM-N performance and how to apply the SNIA PTS-E to NVDIMM block IO performance testing.


Storage Performance Analysis for Big Data Processing

Da Qi Ren, Staff Research Engineer, Huawei Technologies
Zane Wei, Director, Huawei Technologies

Abstract

End-to-end big data benchmarking has attracted intense attention across the ICT industry, and the related techniques are being investigated by numerous hardware and software vendors. Storage, as one of the core components of a data center system, needs specially designed approaches to measure, evaluate and analyze its performance. This talk introduces our methods for creating a storage performance model based on workload characterization, algorithm-level behavior tracing and capture, and software platform management. The functionality and capability of our methodology for quantitative analysis of big data storage have been validated through benchmarks and measurements performed on a real data center system.

Learning Objectives

  • Storage Performance Measurement and Evaluation
  • Scalable and Distributed Storage Systems
  • Best practices architecture for Big Data
  • Performance analysis

PERSISTENT MEMORY

 

 

Preparing Applications for Persistent Memory

Doug Voigt, Distinguished Technologist, HP

Abstract

New persistent memory technologies promise to revolutionize the way applications store data. Many aspects of application data access will need to be revisited in order to get full advantage of these technologies. The journey will involve several new types of libraries and ultimately programming language changes. In this session we will use the concepts of the SNIA NVM Programming Model to explore the emerging landscape of persistent memory related software from an application evolution point of view.

Learning Objectives

  • Learn how applications can navigate the transitions created by persistent memory
  • Learn how additional software and services can help.

Managing the Next Generation Memory Subsystem

Paul von Behren, Software Architect, Intel

Abstract

New memory technologies are emerging which bring substantial performance and reliability benefits, but these benefits are only achieved with careful provisioning and on-going management of the memory subsystem. Non-volatile memory technologies in particular have unique characteristics that require rethinking memory and storage management. This talk begins with an overview of emerging memory device types with a focus on non-volatile DIMMs. We’ll cover the management concepts and features of these new technologies and put them in the context of overall memory subsystem and server management. The talk concludes with an overview of SNIA, DMTF and other standards that are being introduced to drive interoperability and encourage the development of memory subsystem management tools.

Learning Objectives

  • Discuss the features of emerging memory technologies and the system management challenges that result from these new features
  • Discover the concepts, practices and tools that administrators can use to discover, provision, and manage the growing complexity of the memory subsystem
  • Review the standardization efforts, documentation, and open source code available to developers looking to get started with memory management development projects

SNIA Tutorial:
The NVDIMM Cookbook: A Soup-to-Nuts Primer on Using NVDIMMs to Improve Your Storage Performance

Jeff Chang, VP Marketing and Business Development, AgigA Tech
Arthur Sainio, Senior Director Marketing, Smart Modular

Abstract

Non-Volatile DIMMs, or NVDIMMs, have emerged as a go-to technology for boosting performance for next generation storage platforms. The standardization efforts around NVDIMMs have paved the way to simple, plug-n-play adoption. If you're a storage developer who hasn't yet realized the benefits of NVDIMMs in your products, then this session is for you! We will walk you through a soup-to-nuts description of integrating NVDIMMs into your system, from hardware to BIOS to application software. We'll highlight some of the "knobs" to turn to optimize use in your application as well as some of the "gotchas" encountered along the way.

Learning Objectives

  • Understand what an NVDIMM is
  • Understand why an NVDIMM can improve your system performance
  • Understand how to integrate an NVDIMM into your system

Remote Access to Ultra-low-latency Storage

Tom Talpey, Architect, Microsoft

Abstract

A new class of ultra-low latency storage is emerging, including Persistent Memory (PM), as well as advanced nonvolatile storage technologies such as NVMe. The SNIA NVM TWG has been exploring these technologies and has more recently prepared a white paper for requirements of remotely utilizing such devices. Remote Direct Memory Access (RDMA), arbitrated by file and block storage protocols, is a clear choice for this access, but existing RDMA and storage protocol implementations incur latency overheads which impact the performance of the solution. And while raw fabric block protocols can address latency overheads, they do not address data integrity, management and sharing.

This talk explores the issues, and outlines a path-finding effort to make small, natural extensions to RDMA and upper layer storage protocols to reduce these latencies to acceptable, minimal levels, while preserving the many advantages of the storage protocols they extend.

Learning Objectives

  • Learn key technologies enabling remote access to new storage media, such as NVM and PM
  • Understand the issues in making full use of PM technologies remotely, with today’s protocols
  • Explore a path to fully access the benefits of remote access to PM devices in the future

Solving the Challenges of Persistent Memory Programming

Sarah Jelinek, Senior SW Engineer, Intel

Abstract

Programming with persistent memory is hard, similar to the type of programming a file system developer does because of the need to write changes out in a way that maintains consistency. Applications must be re-architected to change data stored in two tiers (DRAM and storage) into three tiers (DRAM, pmem and storage). This presentation will review key attributes of persistent memory as well as outline architectural and design considerations for making an application persistent memory aware. This discussion will conclude with examples showing how to modify an application to provide consistency when using persistent memory.
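The core consistency discipline can be illustrated with a "write the payload, make it durable, then flip a commit flag" sketch. The example below uses an ordinary memory-mapped file as a stand-in; real persistent memory code would use NVML's transactional APIs or explicit cache-flush instructions in C, and the file path is hypothetical.

```python
#!/usr/bin/env python3
"""Conceptual sketch of write ordering for consistency on persistent memory.

Python's mmap + flush is only a stand-in for CPU flush/fence instructions or
the NVM Library; it illustrates the ordering idea, not a real implementation.
"""
import mmap
import os
import struct

PATH = "/tmp/pmem-record.bin"          # hypothetical DAX-mapped file
SIZE = mmap.PAGESIZE

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o644)
os.ftruncate(fd, SIZE)
pm = mmap.mmap(fd, SIZE)

def persist():
    pm.flush(0, mmap.PAGESIZE)         # flush the whole (single-page) region

def store_record(value: int):
    # 1. Write the payload while the commit flag is still clear.
    pm[8:16] = struct.pack("<Q", value)
    persist()
    # 2. Only after the payload is durable, set the commit flag.
    pm[0:8] = struct.pack("<Q", 1)
    persist()

store_record(42)
pm.close()
os.close(fd)
```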

Learning Objectives

  • Introduce how persistent memory differs from DRAM and standard storage for storing application data
  • Show examples of the architectural considerations for making an application persistent memory aware
  • Give examples of how to modify an existing application to utilize persistent memory
  • Discuss the open source Non-Volatile Memory Library (NVML) available on GitHub for use to help with persistent memory programming

RDMA with PM: Software Mechanisms for Enabling Persistent Memory Replication

Chet Douglas, Principal SW Architect, Intel

Abstract

With the emergence of persistent memory, the need to replicate data across multiple clusters arises. RDMA to persistent memory provides a mechanism to replicate data remotely but requires SW to implicitly make previously written data persistent. This presentation will review key HW components involved in RDMA and introduce several SW mechanisms that can be utilized with RDMA with PM. The discussion will conclude with a review of performance implications of each solution and methods that can be utilized to model the latencies associated with RDMA and PM.

Learning Objectives

  • Introduce HW Architecture concepts of Intel platforms that will affect RDMA usages with PM
  • Introduce SW Mechanisms that can be utilized in RDMA Application SW to make RDMA Write data persistent
  • Review detailed sequences, platform performance implications, and Pros and Cons for each proposed SW Mechanism
  • Intel platform configuration details for using RDMA with PM
  • Overview of Intel’s SW Application for evaluating RDMA performance and SW modeling of new Intel CPU instructions for use with PM

Advances in Non-Volatile Storage Technologies

Thomas Coughlin, President, Coughlin Associates
Edward Grochowski, Storage Consultant, Self Employed

Abstract

Today, HDD areal densities are nearing 1 Terabit per sq. in. and Flash memories are applying lithographic exposures much smaller than 28 nm, or are advancing to 3D structures. These products require new and demanding process techniques to maintain storage market growth and cost competitiveness. Alternative new non-volatile memory and storage technologies such as STT RAM, RRAM, PCM and several others are becoming more attractive to meet this growing demand for storage and memory bytes. This study will address the status of NVM device technologies and review requirements in process, equipment and innovations. Progress in implementing these devices as well as future concerns to achieve economic implementation will be outlined. The dependency of NVM on CMOS driver devices to attain a high density memory or storage alternative will be discussed. A concluding assessment of NVM implementation will be made in the context of HDD and Flash memories.


Understanding the Intel/Micron 3D XPoint Memory

Jim Handy, General Director, Objective Analysis

Abstract

Intel and Micron recently introduced their 3D XPoint (pronounced "crosspoint") memory technology, a new development that can serve as both system memory (RAM) and nonvolatile storage. The new technology is up to 1,000 times faster than NAND flash with 1,000 times greater endurance and is 10 times as dense as DRAM. It is said to be in production today with samples this year and shipping products starting next year.


Nonvolatile Memory (NVM), Four Trends in the Modern Data Center, and the Implications for the Design of Next Generation Distributed Storage Platforms

David Cohen, System Architect, Intel
Brian Hausauer, Hardware Architect, Intel

Abstract

There are four trends unfolding simultaneously in the modern Data Center: (i) Increasing Performance of Network Bandwidth, (ii) Storage Media approaching the performance of DRAM, (iii) OSVs optimizing the code path of their storage stacks, and (iv) single processor/core performance remains roughly flat. A direct result of these trends is that application/workloads and the storage resources they consume are increasingly distributed and virtualized. This, in turn, is making Onload/Offload and RDMA capabilities a required feature/function of distributed storage platforms. In this talk we will discuss these trends and their implications on the design of distributed storage platforms.

Learning Objectives

  • Highlight the four trends unfolding in the data center
  • Elaborate on the implication of these trends on design of modern distributed storage platforms
  • Provide details on how onload/offload mechanisms and RDMA become feature/function requirements for these platforms in the near-future

Developing Software for Persistent Memory

Dr. Thomas Willhalm, Senior Application Engineer, Intel
Karthik Kumar, Senior Application Engineer, Intel

Abstract

NVDIMMs provide applications the ability to access in-memory data that will survive reboots: this is a huge paradigm shift happening in the industry. Intel has announced new instructions to support persistence. In this presentation, we educate developers on how to take advantage of this new kind of persistent memory tier. Using simple practical examples [1] [2], we discuss how to identify which data structures are suited for this new memory tier, and which are not. We provide developers a systematic methodology to identify how their applications can be architected to take advantage of persistence in the memory tier. Furthermore, we will provide basic programming examples for persistent memory and present common pitfalls.

Learning Objectives

  • NVDIMMs have the potential to be a game changer for applications, as they offer the ability to access “in-memory data” that will survive reboots.
  • In this presentation, we educate developers on how to take advantage of this new kind of persistent memory tier.
  • Furthermore, we will provide basic programming examples for persistent memory and present common pitfalls.

Building NVRAM Subsystems in All-Flash Storage Arrays

Pete Kirkpatrick, Principal Engineer, Pure Storage

Abstract

The emergence of All-Flash Storage Arrays is transforming the storage industry. These arrays require new subsystems to provide consistently low latency for both reads and persisted writes. NVRAM solutions range from SLC NAND Flash to NVDIMMs and, in the future, more exotic solutions. In this talk, we discuss the hardware and software development of an NVDIMM-style NVRAM solution using NVMe over PCIe, and compare the performance of the NVMe-based solution to an SLC NAND Flash-based solution. Finally, we provide a survey of other future NVRAM solutions and how they would impact system hardware and software development.

Learning Objectives

  • Hardware development of an NVMe over PCIe NVRAM solution
  • Software development of an NVMe over PCIe NVRAM solution
  • Performance comparison of SLC NAND flash-based NVRAM vs. NVMe NVDIMM based solution

PROTOCOLS

 

 

Using iSCSI or iSER?

Ásgeir Eiriksson, Chief Technology Officer, Chelsio Communications

Abstract

This talk will demystify the relationship and relative performance and capabilities of iSCSI and iSER. The talk provides an introduction to iSER and its position in an iSCSI environment, and presents performance results to compare the two protocols when both are processed in hardware within an HBA. The talk concludes with a set of recommendations on deploying iSCSI and iSER within storage networks.

Learning Objectives

  • Develop an understanding of iSCSI and iSER protocol stacks with comparison of capabilities for each
  • Learn to recognize the performance benefits and benchmark results
  • Have a clear understanding of when to use iSER

 

 

Linux SMB3 and pNFS - Shaping the Future of Network File Systems

Steven French, Principal System Engineer, Samba team/Primary Data

Abstract

Network File Systems, needed for accessing everything from low end storage, to Windows and Mac servers, to high end NAS, continue to evolve. NFS and SMB, the two dominant network storage protocols, also continue to improve with exciting new features in their most recent dialects. And the Linux clients continue to improve their implementation of these protocols, recently adding security and performance enhancements for SMB3 and new pNFS layout types along with the NFSv4.2 support in the NFS client.

This presentation will discuss some of the recent changes in network file system support in Linux, including enhanced CIFS/SMB2/SMB3 support in the kernel client, as well as new developments in the NFS client. It will also discuss in-progress work on new protocol features for improved performance, clustering scalability, reliability and availability, and will compare and contrast some of the key features of the SMB3 and NFS Linux clients.

Learning Objectives

  • Understanding key features and limitations of the SMB3 support in Linux client
  • Understanding key features and limitations of the NFS client in Linux client
  • Understanding which protocol (and dialect) is better for common use cases
  • Understanding key differences between NFSv4.2 and SMB3.1
  • Understanding common SMB3 client configuration choices and why they are useful

 

 

Move Objects to LTFS Tape Using HTTP Web Service Interface

Matt Starr, Chief Technical Officer, Spectra Logic
Jeff Braunstein, Developer Evangelist, Spectra Logic

Abstract

Tape has always been a reliable, low-cost, green medium for long-term storage needs. However, moving objects to tape has sometimes been challenging and expensive. The DS3 protocol, which is an extension of the S3 protocol popularized by Amazon, provides easy storage to tape through HTTP web services. Additionally, DS3 uses the open Linear Tape File System (LTFS) format to store the objects on tape, making the data readable by many applications. With DS3, developers can easily create applications that move data to tape.
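At its simplest, storing an object looks like an ordinary S3-style HTTP PUT, as in the hedged sketch below; the endpoint and bucket names are hypothetical, request signing is omitted, and DS3's bulk-job workflow (which the SDKs wrap) is not shown.

```python
#!/usr/bin/env python3
"""Sketch: store an object over an S3-style HTTP interface.

Endpoint, bucket, and credential handling are hypothetical, and this ignores
DS3's bulk-job negotiation; the vendor SDKs handle those details.
"""
import requests

ENDPOINT = "http://blackpearl.example.com"        # hypothetical DS3 endpoint
BUCKET, KEY = "archive-bucket", "project/results.tar"

with open("results.tar", "rb") as f:
    resp = requests.put(f"{ENDPOINT}/{BUCKET}/{KEY}",
                        data=f,                   # stream the object body
                        headers={"Content-Type": "application/octet-stream"})
resp.raise_for_status()
print("stored", KEY, "->", resp.status_code)
```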

Learning Objectives

  • Understand the DS3 HTTP protocol and how it can be used to move data to tape.
  • Learn the difference between DS3 and S3.
  • Review the software development kits (SDK) available to simplify DS3 application development.
  • See the sample client applications that have been built using the DS3 SDKs.
  • Understand the different components needed for DS3 application development.

SDS - SOFTWARE DEFINED STORAGE

 

 

 

 

Introduction to CoprHD: An Open Source Software Defined Storage controller - deep dive for developers

Anjaneya Chagam, Principal Engineer, Intel
Urayoan Irizarry, Consultant Software Engineer, EMC

Abstract

CoprHD is an open source software defined storage controller based on EMC's ViPR Controller. Software Defined Storage (SDS) has significant impact on how companies deploy and manage public and private cloud storage solutions to deliver on-demand storage services while reducing the cost. Similar to Software Defined Networking (SDN), SDS promises to simplify management of diverse provider solutions and ease of use. CoprHD open source SDS controller centralizes management and automation of multi-vendor storage systems to deliver on-demand policy driven storage services.

This presentation will cover CoprHD controller overview, architecture, driver and plug-in development that will help in jump starting your community development.

Learning Objectives

  • Learn Software Defined Storage framework for managing cloud wide storage services
  • Understand how to get engaged in CoprHD community for code contributions
  • Understand CoprHD controller internals and integration with Orchestration frameworks (e.g., OpenStack, VMWare)
  • Gain in-depth exposure to tools and techniques to write custom plug-ins using REST APIs
  • Learn how to write drivers that will expose storage system advanced features using driver interfaces and controller extensions

 

 

Software Defined Storage Based on Direct Attached Storage

Slava Kuznetsov, Principal Software Engineer, Microsoft

Abstract

Software defined storage solutions can (and should) be based on industry-standard hardware! This talk will cover the technical architecture of the solution from the lead developer’s viewpoint, with design decisions explained, and optimization evaluated. We will also enumerate the wire protocol. We will demonstrate the end-to-end solution, scale and performance.

SECURITY

 

 

 

 

Hackers, Attack Anatomy and Security Trends

Geoff Gentry, Regional Director, Independent Security Evaluators

Abstract

Practical experience from implementing the OASIS Key Management Interoperability Protocol (KMIP) and from deploying and interoperability testing multiple vendor implementations of KMIP form the bulk of the material covered. Guidance will be provided that covers the key issues to require that your vendors address and how to distinguish between simple vendor tick-box approaches to standard conformance and actual interoperable solutions.

Learning Objectives

  • In-depth knowledge of the core of the OASIS KMIP
  • Awareness of requirements for practical interoperability
  • Guidance on important of conformance testing

Mobile and Secure: Cloud Encrypted Objects Using CDMI

David Slik, Technical Director, Object Storage, NetApp

Abstract

Data wants to live in the cloud and move freely between enterprises, phones, homes and clouds, but one major obstacle remains: how can your data be protected against alteration and disclosure? This session introduces the Cloud Encrypted Object Extension to the CDMI standard, which permits encrypted objects to be stored, retrieved, and transferred between clouds. Originating out of work to make CDMI usable for Electronic Medical Records (EMR) applications, Cloud Encrypted Objects are a standards-based way to encrypt data, verify integrity, and provide access to secured content, such that objects can freely move between clouds in a cross-protocol manner.
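As a rough illustration of the client side, the sketch below stores a client-encrypted payload as a CDMI object over HTTP; the endpoint and container path are hypothetical, and the metadata marker is illustrative rather than the extension's actual representation.

```python
#!/usr/bin/env python3
"""Sketch: store a client-side encrypted payload through CDMI's HTTP interface.

The CDMI endpoint and container are hypothetical, and the metadata field used
to mark the ciphertext is illustrative -- the Cloud Encrypted Object extension
defines its own representation, which is not reproduced here.
"""
import base64
import requests

ENDPOINT = "https://cloud.example.com/cdmi"       # hypothetical CDMI endpoint
ciphertext = b"...bytes encrypted by the client before upload..."

body = {
    "mimetype": "application/octet-stream",
    "valuetransferencoding": "base64",
    "value": base64.b64encode(ciphertext).decode("ascii"),
    "metadata": {"org_example_encrypted": "true"},   # illustrative marker only
}
resp = requests.put(
    f"{ENDPOINT}/records/patient-1234",
    json=body,
    headers={"Content-Type": "application/cdmi-object",
             "X-CDMI-Specification-Version": "1.1"})
resp.raise_for_status()
```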

Learning Objectives

  • Learn how Cloud Encrypted Objects are used by a client
  • Learn how Cloud Encrypted Objects can move between clouds
  • Learn about access control and delegation for Cloud Encrypted Objects
  • Learn how Cloud Encrypted Objects can be stored and accessed from file systems, CDMI, S3, Swift and other repositories

OpenStack Swift On File: User Identity For Cross Protocol Access Demystified

Dean Hildebrand, IBM Master Inventor and Manager | Cloud Storage Software, IBM
Sasikanth Eda, Software Engineer, IBM

Abstract

Swift on File enables a Swift object store hosted on a clustered file system to provide both file and object access to the same data. Such multi-protocol access enables various use cases where data can be ingested via object and processed for analytics over file protocols (SMB/NFS/POSIX). In another manifestation, data can be accessed or shared by the user interchangeably via different protocols, enabling user data sync-and-share across protocols.

For some of these use cases, there is a strong need for common user identity management across object and file protocols, so that one can leverage underlying common file system features like per-user or per-group quota management, per-user/group placement policies on data, or even common authorization across file and object. To achieve this, the approaches need to ensure that objects created by a user via Swift are associated with that user's user ID (UID) and group ID (GID), the same IDs used when the object is accessed by that user via file protocols like NFS/SMB/POSIX (where the IDs are typically stored in a central ID-mapping server like Microsoft AD or LDAP).

The proposed presentation discusses in detail the various issues and nuances associated with common ID management across Swift object access and file access, and presents an approach that solves them without changes to core Swift code by leveraging the powerful Swift middleware framework.
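
As a rough illustration of the middleware-based approach, the sketch below shows a minimal WSGI filter in the standard Swift middleware shape that annotates incoming PUTs with the requesting user's UID/GID; the lookup function and the metadata header names are hypothetical, not part of Swift or of the presented solution.

```python
# Hedged sketch of a Swift-style WSGI middleware. A file-system-backed object
# server could use the injected values to set matching ownership on the file.
import pwd


def lookup_uid_gid(username):
    """Hypothetical lookup; a real deployment would query AD/LDAP here."""
    entry = pwd.getpwnam(username)          # local stand-in for an ID-mapping server
    return entry.pw_uid, entry.pw_gid


class OwnershipMiddleware(object):
    """Minimal WSGI filter in the usual Swift middleware shape."""

    def __init__(self, app, conf):
        self.app = app
        self.conf = conf

    def __call__(self, env, start_response):
        user = env.get('REMOTE_USER') or env.get('HTTP_X_AUTH_USER')
        if user and env.get('REQUEST_METHOD') == 'PUT':
            uid, gid = lookup_uid_gid(user.split(':')[-1])
            # Annotate the request; downstream code could chown the backing file.
            env['HTTP_X_OBJECT_META_OWNER_UID'] = str(uid)
            env['HTTP_X_OBJECT_META_OWNER_GID'] = str(gid)
        return self.app(env, start_response)


def filter_factory(global_conf, **local_conf):
    # Standard paste.deploy entry point used by Swift proxy pipelines.
    conf = dict(global_conf, **local_conf)

    def factory(app):
        return OwnershipMiddleware(app, conf)
    return factory
```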

Learning Objectives

  • Understanding OpenStack Swift object store and file access challenges
  • Learning about OpenStack Swift middleware and how developers can use it
  • Learning ID management concepts for file and object
  • Learning the different approaches to solving the Swift-on-File ID management problem
  • Learning an algorithm that developers can deploy to achieve common ID management between object (Swift) and file

 

 

Multi-Vendor Key Management with KMIP

Tim Hudson, CTO and Technical Director, Cryptsoft

Abstract

Practical experience from implementing the OASIS Key Management Interoperability Protocol (KMIP) and from deploying and interoperability-testing multiple vendor implementations of KMIP forms the bulk of the material covered. Guidance will be provided on the key issues you should require your vendors to address, and on how to distinguish between simple vendor tick-box approaches to standards conformance and actual interoperable solutions.
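
One way to get hands-on with multi-vendor interoperability is to drive different KMIP servers through the same client code path. The sketch below uses the third-party PyKMIP library; the constructor arguments and method signatures may vary between PyKMIP releases, and the server hostnames are placeholders.

```python
# Hedged sketch: the same create/get round trip against two vendors' KMIP
# servers. TLS settings normally come from a PyKMIP config file; details here
# are assumptions for illustration.
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums


def roundtrip(hostname, port=5696):
    with ProxyKmipClient(hostname=hostname, port=port) as client:
        key_id = client.create(enums.CryptographicAlgorithm.AES, 256)  # 256-bit AES key
        key = client.get(key_id)                                       # fetch it back
        return key_id, key


# Running the identical round trip against each vendor's server is one quick
# way to separate tick-box conformance claims from actual interoperability.
for server in ("kmip-vendor-a.example.com", "kmip-vendor-b.example.com"):
    print(server, roundtrip(server)[0])
```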

Learning Objectives

  • In-depth knowledge of the core of the OASIS KMIP
  • Awareness of requirements for practical interoperability
  • Guidance on the importance of conformance testing

 

 

Network Bound Encryption for Data-at-Rest Protection

Nathaniel McCallum, Senior Software Engineer, Red Hat

Abstract

Setting up a system to store sensitive data is the easy part. Protecting that data from prying eyes is much harder. Warranty repair? Retiring old disks? Sure, you can store your data on encrypted disks. But now you get to manage all the disk encryption keys, creating a high-risk target for active attackers.

In this talk we will introduce Deo, an open source project which implements a new technique for binding encryption keys to a network. This technique provides secure decentralized storage and management of decryption keys so that disk encryption can become entirely transparent and automatic.
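
The core idea can be illustrated with a small, self-contained sketch (a conceptual analogy, not Deo's actual protocol or API): the disk's volume key is stored wrapped to a network decryption service's public key, so decryption only works where that service is reachable and no per-disk key database has to be managed.

```python
# Conceptual sketch only: asymmetric wrapping of a volume key to a network
# service's public key. Key sizes and the "service" are illustrative.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Key pair held by the network decryption service.
service_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

# Per-disk volume key (what the disk encryption layer would actually use).
volume_key = os.urandom(32)

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Stored alongside the disk: useless without the service's private key.
wrapped = service_key.public_key().encrypt(volume_key, oaep)

# At boot, the client sends the wrapped blob to the service, which unwraps it
# only for hosts it can reach and authenticate on the local network.
unwrapped = service_key.decrypt(wrapped, oaep)
assert unwrapped == volume_key
```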

Learning Objectives

  • Outline a disk's full life-cycle
  • Identify data vulnerability points
  • Demonstrate how to use encryption to protect data-at-rest
  • Survey encryption key management
  • Use asymmetric cryptography to reduce management complexities

SMB

 

 

SMB 3.1.1 Update

Greg Kramer, Principal Software Engineer, Microsoft
Dan Lovinger, Principal Software Engineer, Microsoft

Abstract

The SMB3 ecosystem continues to grow with the introduction of new clients and server products, a growing deployment base, and new generations of networking technologies. This talk covers the changes to the SMB3 protocol in Windows 10 and Windows Server 2016, the design considerations, and how the changes will affect both protocol implementers and customers. The challenges and performance of multi-vendor, switched dual NIC 100Gb RDMA will be presented for the first time.


Samba and SMB3: Are We There Yet?

Ira Cooper, Principal Software Engineer, Red Hat

Abstract

Like passengers on a long car ride, the one question on everyone's mind regarding Samba and SMB3 is, "Are we there yet?"

This talk will take you on a tour of how Samba will go from its current nominal support of SMB3 to more comprehensive support of SMB3. You will be given an overview of Samba's architecture, design, and the implementation status of key SMB3 features including Witness, Multichannel, SMB Direct, and Persistent Handles.

By the end, you will know exactly where we are and how far we have to go.

Learning Objectives

  • Current status of SMB3 support in Samba
  • Architecture and design of SMB3 features in Samba
  • Challenges faced during the implementation of SMB3 in Samba so far
  • The roadmap for SMB3 support in Samba going forward

Tuning an SMB Server Implementation

Mark Rabinovich, R&D Manager, Visuality Systems

Abstract

Server platforms range from low-end NAS solutions to high-end storage arrays. All of them, however, require an SMB server, and to serve all of them such a solution must be highly customizable. We will discuss methods of SMB server parameterization that meet this wide range of requirements. The discussion will cover both instrumentation and performance figures, and a special topic will be dedicated to measuring SMB performance over RDMA.

Learning Objectives

  • SMB3 server implementation
  • SMB scalability
  • SMB performance
  • SMB over RDMA

Azure File Service: ‘Net Use’ the Cloud

David Goebel, Software Engineer, Microsoft

Abstract

Microsoft Azure has provided REST endpoints for blobs, tables, and queues since its inception. This is an efficient and simple stateless storage API for new applications. However, there is a very large installed base of mature applications, especially enterprise and vertical applications, written to a conventional file API such as Win32 or the C run-times. Azure File Service provides [MS-SMB2]-compliant file shares with the same high availability as Azure’s REST endpoints, since the backing store for both transient handle state and file data is, under the hood, Azure tables and blobs. As a bonus, the file share namespace is also exposed via REST, allowing simultaneous and coherent access to file data from both endpoints. This talk will relate the experience and challenges of designing and implementing a wire-compliant, continuously available SMB server whose backing store is not even a conventional file system, let alone NTFS.

Learning Objectives

  • Learn how an SMB server can be built on top of something other than a conventional file system.
  • Gain an appreciation of the complexities involved in durably committing what is usually considered volatile handle state which must be both highly available and high performance.
  • Be inspired by the possibilities of immediately running existing applications unmodified against the cloud while simultaneously leveraging REST access to the application’s data.

SMB3 Multi-Channel in Samba

Michael Adam, Principal Software Engineer, Red Hat

Abstract

The implementation of advanced SMB3 features is a broad and important set of topics on the Samba roadmap. One of these SMB3 features that is currently being actively worked on is Multi-Channel, a kind of channel bonding at the SMB level intended to increase both performance and fault-tolerance of SMB sessions. It is not only one of the most generally useful features of SMB3 but also a prerequisite for enabling RDMA as a transport for SMB with SMB Direct.

This talk will provide details about the current project to finish the implementation of SMB3 Multi-Channel in Samba, explaining the challenges for development and how they are solved. The presentation will include demos. The talk will conclude with a brief outlook on how SMB Direct support can be added to Samba.

Learning Objectives

  • Refresher on Multi-Channel
  • State of implementation of Multi-Channel in Samba
  • Challenges for Samba to implement Multi-Channel
  • Design of Multi-Channel in Samba
  • Outlook to SMB Direct support

The Past, Present and Future of Samba Messaging

Volker Lendecke, Developer, Samba Team / SerNet

Abstract

Samba components have to talk to each other. One of the original requirements for messaging was oplock breaks: one smbd has to tell another smbd to give up an oplock. This used to be done via local UDP packets until it was converted to use a general, tdb-based messaging API. The Samba4 effort by Tridge implemented messaging on top of local unix domain datagram sockets. Samba 4.2 has a new implementation of this concept.

Meanwhile ctdb provides clusterwide messaging, using a central daemon per cluster node which is aware of the cluster configuration.

The talk will describe the various implementations in detail, their strengths and weaknesses. It will also describe possible future developments for high-performance local and clusterwide messaging. It will give Samba implementors an overview of a critical piece of the Samba architecture and where it is headed.
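
For readers unfamiliar with the underlying primitive, the snippet below shows plain AF_UNIX datagram messaging between two processes, the OS mechanism the newer Samba messaging builds on; it is not Samba's messaging API, and the socket path and message payload are illustrative.

```python
# Illustration of local AF_UNIX datagram messaging, the primitive underneath
# the per-process messaging described above (ctdb adds the cluster-wide hop).
import os
import socket

RECEIVER = "/tmp/smbd-4711.msg"   # hypothetical: e.g. one socket per smbd

# "smbd A" binds a datagram socket and waits for messages such as oplock breaks.
if os.path.exists(RECEIVER):
    os.unlink(RECEIVER)
rx = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
rx.bind(RECEIVER)

# "smbd B" sends a message addressed directly to that process, with no central
# daemon in the local path.
tx = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
tx.sendto(b"oplock-break-request", RECEIVER)

data, _ = rx.recvfrom(1024)
print(data)        # b'oplock-break-request'

rx.close()
tx.close()
os.unlink(RECEIVER)
```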

Learning Objectives

  • Learn about Samba architecture
  • Find out about Samba clustering directions
  • Get insight about Samba performance and scalability improvements

Calling the Witness: SMB3 Failover with Samba/CTDB

Günther Deschner, Developer, RedHat / Samba Team
José Rivera, Software Engineer, Red Hat / Samba Team

Abstract

An SMB3 file server is not complete without the Witness service. Samba is currently developing support for this DCE/RPC service, which allows for Continuous Availability (CA), a much more robust, fine-grained and seamless mechanism for client failover in clustered environments. This talk will outline the current implementation within Samba, the relationship with CTDB, challenges faced in development, and the planned integration with other projects like the CIFS kernel client and Pacemaker. This talk will also include a live demonstration showcasing the Witness infrastructure, its role in CA, and how it can be controlled from a remote application.

Learning Objectives

  • How Witness works and what it does
  • How to achieve a seamless file sharing experience for SMB3 clients
  • How to deal programmatically with failover in clustered Samba environments

SMB 3.0 Transparent Failover for EMC Isilon OneFS

John Gemignani, Senior Consultant, Isilon Storage Division, EMC

Abstract

The EMC Isilon OneFS operating system powers a file system that scales to more than twenty petabytes of data in a single namespace. The transparent failover capabilities of SMB 3.0 are very attractive for providing continuous, non-disruptive availability of this data to users. However, as one can imagine, there are many challenges in building this capability into a scale-out architecture of this magnitude. We want to share the approach we took and the challenges we overcame in the process.

Learning Objectives

  • Fundamentals of SMB 3.0 failover
  • Configuration options to fit workloads running non-server application data
  • Isilon implementation of SMB 3.0 failover, and challenges overcome

The Future is Cloudy - Samba Gateways to a Cloud Storage World

Jeremy Allison, Engineer, Google Samba Team

Abstract

Samba is becoming the product of choice to gateway local SMB file-based access to cloud storage. This talk will cover how this can be achieved, and the potential problems, pitfalls and solutions in designing such a product. I will present a design for architecting such a solution inside Samba.

SMR - SHINGLED MAGNETIC RECORDING

 

 

SMR – The Next Generation of Storage Technology

Jorge Campello, Director of Systems - Architecture and Solutions, HGST

Abstract

Shingled Magnetic Recording (SMR) is the next-generation storage technology for continued improvement in HDD areal density, and it offers new opportunities for open compute environments. In massive, scale-out cold storage applications such as active archive, social media and long-term data storage, SMR HDD-based solutions offer the highest density, lowest TCO and leading $/TB.

This speaking session will clearly articulate the difference in SMR drive architectures and performance characteristics, and will illustrate how the open source community has the distinct advantage of integrating a host-managed platform that leverages SMR HDDs. Further, HGST will discuss how SMR presents the possibility for unprecedented storage capacities, maintains a familiar form factor, and creates a lower-power envelope so architects can create responsive cold storage data pools that can be accessed in near real-time.

Learning Objectives

  • Demonstrate how leveraging a SMR HDD provides advantages to a host-managed platform
  • Show how SMR is enabling cold storage data pools on disk, by vastly increasing the amount of archive data that can be actively accessed
  • Provide an insight into the future of SMR – what does this mean for next steps of the data center

Host Managed SMR

Albert Chen, Engineering Program Director, WDC
Jim Malina, Technologist, WDC

Abstract

Any problem in computer science can be solved with another layer of indirection. Shingled magnetic recording is no different – the only “difficulty” is determining where to add the additional layer of indirection/abstraction to enable maximum flexibility and efficiency. We will go over the various SW/FW paradigms that attempt to abstract away SMR behavior (e.g. a user-space library, a device mapper, an SMR-aware file system, an enlightened application). Along the way, we will also explore what deficiencies (e.g. ATA sense data reporting) are holding back SMR adoption in the data center.
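
The behavior all of these layers have to hide can be summarized in a few lines; the sketch below models a host-managed zone with a write pointer and a sequential-write-only rule (sizes and the in-memory representation are simplified assumptions for illustration).

```python
# Minimal model of the host-managed SMR constraint every indirection layer
# (user-space library, device mapper, SMR-aware file system) has to hide:
# each zone may only be written at its write pointer and is reset as a whole.
class Zone:
    def __init__(self, start_lba, length):
        self.start = start_lba
        self.length = length
        self.write_pointer = start_lba         # next LBA that may be written

    def write(self, lba, nblocks):
        if lba != self.write_pointer:
            raise IOError("unaligned write: host-managed zones are sequential-only")
        if lba + nblocks > self.start + self.length:
            raise IOError("write crosses zone boundary")
        self.write_pointer += nblocks           # advance, as the drive would

    def reset(self):
        self.write_pointer = self.start         # whole-zone reset (ZBC RESET WRITE POINTER)


zone = Zone(start_lba=0, length=524288)         # e.g. 512 B blocks, 256 MiB zone
zone.write(0, 8)                                # OK: at the write pointer
zone.write(zone.write_pointer, 8)               # OK: append
# zone.write(0, 8)                              # would raise: not at the write pointer
```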

Learning Objectives

  • Host managed SMR support in Linux
  • Linux host managed SMR simulator
  • Linux host managed SMR device mapper
  • What deficiencies (e.g. ATA sense data reporting) are holding back SMR adoption in the data center
  • Various SW/FW paradigms that attempt to abstract away SMR behavior

SNIA Tutorial:
FS Design Around SMR: Seagate’s Journey and Reference System with EXT4

Adrian Palmer, Drive Development Engineering, Seagate Technologies

Abstract

SMR is a game-changing drive technology, embraced by all major manufacturers. SMR changes fundamental assumptions of file system management: the long-held assumption of random writes is abandoned, and drives now behave like sequential-access tape.

Seagate is leading the way in providing a standards compliant IO stack for use with the new drives. Using the new ZAC/ZBC commands to make and maintain a file system is essential for performant operation. Seagate is sharing lessons learned from modifying EXT4 for use with SMR. This effort is called the SMR Friendly File System (SMRFFS).

Learning Objectives

  • Forward-write only considerations for the block allocation scheme
  • Zones/BlockGroup/AllocationGroup alignment and use
  • Superblock and other required write-in-place management schemes

An SMR-Aware Append-Only File System

Stephen Morgan, Senior Staff Research Engineer, Huawei
Chi-Young Ku, Huawei

Abstract

The advent of shingled magnetic recording (SMR) is bringing significant changes to modern file system design. Update-in-place data structures are no longer practicable; log structuring is becoming the de facto approach. We report on simulations of a new SMR-aware file system (SAFS) for append-only or circular write-only environments that merges log-structured design with traditional journaling, to the advantage of both techniques. For example, sequential read performance should be better with SAFS than with a pure LFS because with SAFS, compaction moves blocks of data to contiguous zones. And, like a pure LFS, write performance should be high because writes are converted to appends to a single zone at a time. In this talk, we discuss the effects that SMR is having on basic file system design, how we arrived at our hybrid design, simulations of the design, and results we’ve obtained to date, especially a comparison of the performance of a simulation of SAFS, a traditional journaling file system, and an LFS, all under Linux.

Learning Objectives

  • SMR Disk Technology Overview and its Effect on File Systems
  • SMR Disk Technology and Log-structuring File Systems
  • SMR Disk Technology and Append-only File Systems

Strategies for Using Standard File Systems on SMR Drives

Hannes Reinecke, Senior Engineer, SUSE

Abstract

SMR (Shingled Magnetic Recording) drives are poised to become the de facto standard for high-density disk drives. The technology behind these drives poses new challenges to existing storage stacks by introducing new concepts like strict sequential write ordering, zone management, etc.

While the ultimate goal for SMR drives is to use a file system natively, currently none of the standard file systems can run without modifications.

In this presentation I'll outline a zone-based caching strategy and a remapping strategy for SMR drives. I will present the advantages and disadvantages of each, along with a sample implementation under Linux.

Additionally, I'll present results from running unmodified btrfs, xfs, and ext4 file systems using both of these strategies.
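
As a toy illustration of the remapping idea, the sketch below stages random writes into a cache zone and redirects reads through a block map; the fake device, zone numbering, and policy details are assumptions for illustration, not the sample implementation described in the talk.

```python
# Toy sketch: out-of-place updates land in a cache zone that still permits
# sequential appends, and a map redirects later reads. A background destage
# step would fold cached blocks back into their (sequential) home zones.
class FakeZonedDevice:
    """In-memory stand-in with read(loc) and append(zone, data) -> location."""
    def __init__(self):
        self.blocks = {}
        self.next_slot = {}

    def append(self, zone, data):
        loc = (zone, self.next_slot.get(zone, 0))
        self.next_slot[zone] = self.next_slot.get(zone, 0) + 1
        self.blocks[loc] = data
        return loc

    def read(self, loc):
        return self.blocks.get(loc)


class RemappingLayer:
    def __init__(self, device, cache_zone):
        self.device = device
        self.cache_zone = cache_zone
        self.block_map = {}               # logical LBA -> location in cache zone

    def write(self, lba, data):
        loc = self.device.append(self.cache_zone, data)   # sequential append only
        self.block_map[lba] = loc

    def read(self, lba):
        loc = self.block_map.get(lba, lba)                 # freshest copy wins
        return self.device.read(loc)


dev = FakeZonedDevice()
layer = RemappingLayer(dev, cache_zone=99)
layer.write(12345, b"updated block")
print(layer.read(12345))                  # b"updated block", served from the cache zone
```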

Learning Objectives

  • Problems for file systems on SMR drives
  • Strategies for using file systems on SMR drives
  • Advantages and disadvantages for each of these strategies

Implement Object Storage with SMR based Key-Value Store

Qingchao Luo, Massive Storage Chief Architect, Huawei

Abstract

Object storage technology is well suited to the cloud storage and cold storage markets because of its simplicity and scalability. Most of the workload on object storage is write-once with few modifications, followed by many reads. SMR HDD technology can therefore be leveraged in object storage, since it provides a cost-efficient medium but requires sequential writes with no write-in-place. This presentation introduces how to design a log-structured key-value store on SMR HDDs and then build competitive object storage on top of it.
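
A minimal sketch of the log-structured layout this implies: values are only ever appended to the currently open zone, and an in-memory index records where each key lives (the record format and zone sizes below are illustrative assumptions, not the presented design).

```python
# Sketch of a log-structured key-value layout over SMR-style zones: appends
# only, with a volatile index mapping keys to (zone, offset, length).
import struct


class SMRKeyValueStore:
    def __init__(self, zones, zone_size):
        self.zones = zones                    # list of byte buffers standing in for zones
        self.zone_size = zone_size
        self.open_zone, self.offset = 0, 0
        self.index = {}                       # key -> (zone, offset, value length)

    def put(self, key, value):
        record = struct.pack(">I", len(value)) + value
        if self.offset + len(record) > self.zone_size:     # zone full: open the next one
            self.open_zone += 1
            self.offset = 0
        zone = self.zones[self.open_zone]
        zone[self.offset:self.offset + len(record)] = record   # sequential append only
        self.index[key] = (self.open_zone, self.offset, len(value))
        self.offset += len(record)

    def get(self, key):
        zone, off, length = self.index[key]
        return bytes(self.zones[zone][off + 4:off + 4 + length])


store = SMRKeyValueStore([bytearray(1 << 20) for _ in range(4)], 1 << 20)
store.put(b"photo-1", b"...jpeg bytes...")
print(store.get(b"photo-1"))
```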


Integrating Cooperative Flash Management with SMR Technology for Optimized Tiering in Hybrid Systems

Alan Chen, Principal Software Architect, Radian Memory Systems

Abstract

Integrating Cooperative Flash Management (CFM) with SMR drives can achieve unprecedented efficiencies for data tiering in hybrid systems. As an alternative to the Flash Translation Layers (FTLs) found in conventional SSDs, CFM can provide dramatic improvements in latency, IOPS, bandwidth, and endurance, including an order-of-magnitude advantage in Quality of Service, the most critical metric for Flash storage applications. CFM enables optimizing garbage collection and segment cleaning policies at the system level, opening up a new design space for data center applications. Because CFM and host-managed SMR drives are based upon a similar premise, concepts from the ZBC standard can map to either technology and be used to integrate the two, extending this new system design space into highly optimized data tiering.

Learning Objectives

  • Understanding Cooperative Flash Management: what is it, advantages and limitations
  • Why Flash FTL strategies don't readily apply to SMR drives
  • Mapping ZBC concepts to Flash
  • Leveraging host-managed SMR efforts into Cooperative Flash Management
  • System design for integrating SMR and Cooperative Flash Management

SOLID STATE STORAGE

 

 

 

 

Standardizing Storage Intelligence and the Performance and Endurance Enhancements it Provides

Bill Martin, Principal Engineer Storage Standards, Samsung
Changho Choi, Principal Engineer, Samsung Semiconductor

Abstract

Storage Intelligence allows Solid State Storage to work together with applications to provide enhanced performance and endurance of storage devices. Today standardization of initial features is nearly complete in the SCSI standard and is moving forward in the NVMe standard with SATA standardization close behind that. This presentation will describe the details that Storage Intelligence is standardizing today and bringing to the standardization process in the near future. Current work involves intelligent placement of data on the storage device, intelligent management of garbage collection, and management of the over provisioning space on the storage device. Future work will add In Storage Compute in the SNIA Object Drive TWG. Each of these four features will be described in detail.

Learning Objectives

  • Define the features of Storage Intelligence
  • Define the state of standards development in SCSI, NVMe, and SATA
  • Why additional controls over storage are necessary

TESTING

 

 

Thousands of Users - Do You Need to Test with Them or Not?

Christina Lara, Senior Software Engineer, IBM
Julian Cachua, Software Engineer, IBM

Abstract

NAS servers handling thousands of clients are becoming a common requirement in diverse environments ranging from public school systems to cancer research. The need to simulate tens of thousands of active users on SMB, NFS, and Object protocols is the first step to finding issues before the customer does.

The next level of complexity and realism is to simulate meaningful workloads from thousands of users in order to find scalability limits, lock collisions, cross-protocol limitations, performance best practices, and pre-sales configuration recommendations that are meaningful to your customer.

This presentation will outline some of the problems encountered, along with the tools and techniques for identifying relevant activity and economically replicating that traffic in the lab without requiring the physical participants (i.e., without the parents of 30,000 school children logging into the system to find out what classes Jack and Jill are enrolled in).
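
One economical pattern for this kind of simulation is many lightweight workers replaying a weighted operation mix; the sketch below is illustrative only, and the mix, helper names, and user count are assumptions rather than the presenters' tooling.

```python
# Sketch: thousands of simulated users replaying a weighted operation mix.
# A real harness would issue SMB/NFS/object calls where the counter increments.
import random
from concurrent.futures import ThreadPoolExecutor

OPERATION_MIX = [("lookup", 0.7), ("read", 0.25), ("write", 0.05)]   # assumed profile


def pick_op():
    r, acc = random.random(), 0.0
    for name, weight in OPERATION_MIX:
        acc += weight
        if r <= acc:
            return name
    return OPERATION_MIX[-1][0]


def simulated_user(user_id, ops=100):
    counts = {"lookup": 0, "read": 0, "write": 0}
    for _ in range(ops):
        counts[pick_op()] += 1        # stand-in for a real protocol operation
    return user_id, counts


# 30,000 lightweight workers stand in for parents checking enrollments, etc.
with ThreadPoolExecutor(max_workers=200) as pool:
    for user_id, counts in pool.map(simulated_user, range(30000)):
        pass                           # aggregate latencies and errors in a real run
```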

Learning Objectives

  • The importance of NAS protocol service interaction with the underlying filesystems.
  • Generate performance recommendations for Filesystem utilization of available disk configuration.
  • Different challenges testing at extremes
  • Methods for replicating representative client workloads

Object and Open Source Storage Testing: Finally, a viable approach

Tim Van Ash, VP of Product Management, Load DynamiX

Abstract

Interest in object-based and software defined storage, such as CEPH, OpenStack Swift, SNIA CDMI and Amazon S3, is expanding rapidly. Is it still just for non-mission critical or archiving applications or can it really be used for more performance-sensitive production application workloads? If so, how can one prove that these newer storage approaches can handle such workloads? Where are the performance limits? What are the testing parameters that the industry should be most concerned about? This session will discuss such topics and propose a new approach to testing the performance of object-based and open source storage.

Learning Objectives

  • Are object-based and software-defined storage, such as CEPH, OpenStack Swift, SNIA CDMI and Amazon S3, still just for non-mission-critical or archiving applications?
  • Understand their performance limits
  • Learn testing parameters that the industry should be most concerned about

Parallelizing a Distributed Testing Environment

Teague Algie, Software Developer, Cleversafe

Abstract

Ensuring software correctness is important in all development environments, but it is critical when developing systems that store mission-critical data. A common bottleneck in the development cycle is the turn-around time for automated regression tests. Yet as products mature, lines of code increase, and features are added, the complexity and number of tests required tend to grow dramatically.

This hampers quick detection and correction of errors, leading to development delays and missed deadlines. To address this problem, we embarked on a path to optimize and parallelize our automated testing framework; in this presentation we detail the result of our effort and the gains we achieved in streamlining our development process.
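
A minimal sketch of the parallelization idea, not the presenters' framework: independent regression tests fan out across a worker pool, and a failure in one test is isolated so it cannot stall the rest of the run.

```python
# Sketch: running independent regression tests in parallel with isolated
# failures. discover_tests() and execute() are hypothetical placeholders for
# a real framework's test discovery and invocation.
from concurrent.futures import ProcessPoolExecutor, as_completed


def discover_tests():
    return [f"regression_case_{i}" for i in range(500)]    # placeholder test IDs


def execute(test_id):
    """Placeholder for invoking the real test; raise on failure."""
    if test_id.endswith("_13"):                             # fake one failure for the demo
        raise AssertionError("expected value mismatch")


def run_one(test_id):
    try:
        execute(test_id)
        return test_id, "PASS"
    except Exception as exc:                                # isolate failures per test
        return test_id, f"FAIL: {exc}"


if __name__ == "__main__":
    failures = []
    with ProcessPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(run_one, t) for t in discover_tests()]
        for fut in as_completed(futures):
            test_id, status = fut.result()
            if status != "PASS":
                failures.append((test_id, status))
    print(f"{len(failures)} failing tests")
```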

Learning Objectives

  • How to scale a test framework as the product increases in complexity
  • Identifying where time is spent in your test framework
  • Making the build system resilient to isolated failures
  • The utility of virtual machines in accelerating testing

VIRTUALIZATION

 

 

Avoiding Common Storage Development Pitfalls: A Review of the Approaches and Lessons Learned with the VMware vSphere Platform

Scott Davis, Chief Technology Officer, Infinio Systems

Abstract

Developing system software for the VMware vSphere platform can be challenging, as there are many constraints on development partners. Yet we all know that early platform architecture decisions can have far-reaching impact on product development options and future capabilities. When building an integrated storage product for VMware, one of the most important decisions is which architectural approach to take to interface with vSphere: virtual appliances, the Pluggable Storage Architecture (PSA), kernel-mode drivers/extensions, or the forthcoming VMware API for IO Filtering (VAIO). In this session, Scott Davis, CTO of Infinio, will weigh each approach’s benefits and challenges, as well as the trade-offs across virtual appliance, kernel-mode and hybrid architectures. Davis will also share lessons learned on the differences in developing for the NFS, VMFS and VSAN data store types, as well as the pitfalls and best practices for implementing VAAI support.

Learning Objectives

  • Understand several approaches to VMware storage development: virtual appliance, PSA, kernel mode and VAIO
  • Learn how to evaluate the benefits and challenges of different architectures, specifically a virtual appliance, kernel-mode operation and a hybrid model
  • Find out how developing for the NFS, VMFS and VSAN data store types presents different challenges
  • Get insight on the best practices for VAAI implementation

Seamless Live Virtual Machine Migration by Mitigating Shared Storage Resource Constraint

Sangeeth Keeriyadath, Senior Staff Software Engineer, IBM
Prasanth Jose, Senior Staff Software Engineer, IBM

Abstract

Virtual Machine (VM) migration is a widely acknowledged feature of most top-selling virtualization solutions, helping businesses tackle hardware maintenance and server consolidation challenges without affecting solution availability. To reap the advantages of this flexibility, businesses have to plan their server networking and storage infrastructure, including cabling layout, well in advance. Providing connectivity to, and sharing of, the same storage resources across servers is a daunting task and often proves to be a bottleneck for the ability to migrate a VM to an unplanned destination server. The objective of this paper is to provide an efficient, workable solution for migrating VMs across different servers in different kinds of data center layouts, where storage may not be shared and/or has heterogeneous (i.e., FC/iSCSI/SAS/FCoE) connectivity. We are showcasing an enhanced VM migration solution.

We were able to successfully extend an existing VM migration solution to achieve VM migration across servers with multiple kinds of storage connectivity and/or servers without common shared storage.

This implementation gives businesses the flexibility to migrate a VM anywhere in their data center, irrespective of storage connectivity type and/or shared storage. This provides big savings for businesses and enhances their flexibility for better management of data centers and cloud-based services.

Learning Objectives

  • Live Machine Migration without shared storage

I/O Virtualization in Enterprise SSDs

Zhimin Ding, Principal Engineer, Design, Toshiba

Abstract

As PCIe-based SSDs become more and more powerful, they are increasingly being used in virtualized server environments. IO virtualization is an efficient method for VMs to share the resources of the SSD, as it allows VMs to communicate directly with the SSD's virtual functions instead of going through the hypervisor layer, thus improving throughput and reducing latency.

In this talk, the author will present methods to manage resource sharing within an IO-virtualization-enabled PCIe SSD with an NVMe front end. The goal is to achieve maximum utilization of SSD internal resources with minimum overhead, while providing customers the flexibility to configure the number of virtual functions and the capabilities associated with each virtual function. More specifically, the presenter will discuss a unified architecture that allows both structurally separable and structurally non-separable resources to be shared by virtual functions within an IOV-enabled PCIe SSD.
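
On Linux, the virtual functions such an IOV-capable SSD exposes are typically instantiated through the standard SR-IOV sysfs attributes; the sketch below is illustrative, the PCI address is hypothetical, and the supported VF count is device-specific.

```python
# Hedged sketch: enabling SR-IOV virtual functions for a PCIe SSD via the
# kernel's standard sysfs attributes (requires root; the PCI address is made up).
from pathlib import Path

PF = Path("/sys/bus/pci/devices/0000:3b:00.0")      # hypothetical NVMe physical function

total = int((PF / "sriov_totalvfs").read_text())     # how many VFs the device offers
wanted = min(total, 4)
(PF / "sriov_numvfs").write_text(str(wanted))        # instantiate the virtual functions

# Each VF now appears as its own PCI function (virtfn0, virtfn1, ...) and can be
# assigned to a VM (e.g. via VFIO), letting the guest submit NVMe commands to its
# share of the SSD's queues without a hypervisor hop in the data path.
for vf in sorted(PF.glob("virtfn*")):
    print(vf.name, "->", vf.resolve().name)
```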

Learning Objectives

  • IO virtualization is important for enterprise SSDs
  • Managing shared resources in the SSD controller contributes to better IO virtualization performance
  • A unified architecture can manage different types of shared resources in an IO-virtualized SSD