Storage Developer Conference Abstracts

Break Out Sessions and Agenda Tracks Include:

Note: This agenda is a work in progress. Check back for updates on additional sessions as well as the agenda schedule.

 

BIG DATA

Can Storage fix Hadoop?

John Webster, Senior Partner, Evaluator Group

Abstract

Survey data shows that at least half of all enterprise data center Hadoop projects are stalled and that only 20% actually make it into production. This presentation looks at the problems that enterprise data center administrators encounter with Hadoop and how the storage environment can be used to fix at least some of them.

Learning Objectives

  • Understand the issues with current enterprise data center Hadoop implementations
  • Learn what the open source community and vendors are doing to fix the problems
  • Learn how storage devices and platforms can be used to address the problems and move Hadoop implementations forward

Hadoop: Embracing Future Hardware

Sanjay Radia, Co-founder, Hortonworks
Suresh Srinivas, Hortonworks

Abstract

This talk looks at the implications of future server hardware for Hadoop - and how to start preparing for them. What would a pure SSD Hadoop filesystem look like, and how do we get there via a mixed SSD/HDD storage hierarchy? What impact would that have on ingress, analysis, and HBase? What could we do better if network bandwidth and latency became less of a bottleneck, and how should interprocess communication change? Would it make the graph layer more viable? What would massive arrays of wimpy cores mean - or a GPU in every server? Will we need to schedule work differently? Will it make per-core RAM a bigger issue? Finally: will this let us scale Hadoop down?


 

BIRDS OF A FEATHER

NVM Programming Model - Next Steps

Abstract

The NVM Programming TWG recently celebrated its first birthday and is finalizing its first publication. We are looking for suggestions from TWG members and non-members on NVM software extensions for future publications. The BOF includes a short overview of the TWG and the NVM Programming Model specification, followed by a round-table discussion of future work items.


Licensing Microsoft File Protocols

Abstract

Abstract coming soon


Green Storage – The Big Picture

Abstract

"The most expensive storage purchased is that which causes the deployment of another Data Center." George Crump, President & Founder Storage-Switzerland In a world of more, more, more, using 'less' to store all of it, is a crucial skill, which translates to a real competitive advantage for an organization.

Join us on September 17th at the MESS meetup as our panel of experts discuss the key techniques for reducing the power, cooling, space, and networking impact of storage, using new paradigms like:

  • IO density metrics
  • Geo-dispersal of data
  • Next-generation storage pods
  • Self-healing protection algorithms

...together contributing to a simple goal: ‘A Lower COST Footprint’.
Improve your company's bottom line by attending the September MESS meetup!

 


Building a Linux Storage Appliance with Data Optimization

Abstract

Data deduplication and compression are no longer storage optimizations relegated to backup. They have become mainstream in primary and high-performance (flash) storage. In this BOF session, we will discuss how to build a Linux storage appliance using standard Linux components (XFS, LVM2, and Linux iSCSI) and Permabit Albireo Virtual Data Optimizer (VDO). Whether you are designing cloud storage, backup solutions, or high-performance flash arrays, this discussion will show you how to build a storage-optimized product in a matter of hours.


Cloud Application Management for Platforms (CAMP)

Abstract

There are multiple commercial PaaS offerings in existence using languages such as Java, Python, and Ruby and frameworks such as Spring and Rails. Although these offerings differ in such aspects as programming languages, application frameworks, etc., there are inherent similarities in the way they manage the applications that are deployed upon them. Cloud Application Management for Platforms (CAMP) specifies deployment artifacts and a RESTful API designed both to ease the task of moving applications between PaaS platforms and to provide an interoperable mechanism for managing PaaS-based applications in a way that is language, framework, and platform neutral.
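
As a rough illustration of the style of API CAMP defines, the sketch below registers a deployment plan with a hypothetical CAMP endpoint and then reads back the resulting assembly resource. The URL, resource names, and JSON fields are illustrative assumptions, not the normative CAMP resource model.

    import requests  # third-party HTTP client (pip install requests)

    BASE = "https://paas.example.com/camp/v1"  # hypothetical CAMP endpoint

    # Deploy an application by POSTing a deployment plan (field names assumed).
    plan = {"name": "hello-web", "artifacts": [{"content": {"href": "hello.war"}}]}
    resp = requests.post(BASE + "/assemblies", json=plan)
    resp.raise_for_status()
    assembly_url = resp.headers["Location"]  # URL of the newly created assembly

    # Manage the running application through the same RESTful resources.
    assembly = requests.get(assembly_url).json()
    print(assembly.get("status"))

Because the resources are plain HTTP and JSON, the same client code should work unchanged against any conforming platform, which is the portability CAMP is after.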



pNFS Open Discussion

Abstract

The last two years have been busy ones for pNFS. This BOF will provide an opportunity for NFSv4 and pNFS implementors, users, and interested parties to come together for open discussion. Potential topics for discussion include details of current pNFS implementations, pNFS scalability, and future directions for NFSv4 and pNFS.

 


BLOCK PROTOCOL

FCoE Direct End-Node to End-Node (aka FCoE VN2VN)

John Hufferd, Hufferd Enterprises

Abstract

A new concept has just been accepted for standardization in the Fibre Channel (T11) standards committee; it is called FCoE VN2VN (aka Direct End-Node to End-Node). The FCoE standard, which specifies the encapsulation of Fibre Channel frames into Ethernet frames, is being extended to permit FCoE connections directly between FC/FCoE End-Nodes. The tutorial will show the fundamentals of the extended FCoE concept that permit it to operate without FC switches or FCoE switches (aka FCFs) and will describe how it might be exploited in Small, Medium or Enterprise Data Center environments -- including the "Cloud" IaaS (Infrastructure as a Service) provider environments.

Learning Objectives

  • The audience will gain a general understanding of the concept of using a Data Center type Ethernet for the transmission of Fibre Channel protocols without the need for an FCoE Forwarder (FCF).
  • The audience will gain an understanding of the benefits of converged I/O and how a Fibre Channel protocol can share an Ethernet network with other Ethernet-based protocols, establishing a virtual FCoE link directly between the End-Nodes.
  • The audience will gain an understanding of the potential business value and the configurations appropriate for gaining maximum value from Direct End-Node to End-Node operation, including the value of this protocol to the "Cloud" IaaS (Infrastructure as a Service) provider.

SCSI Standards and Technology Update

Marty Czekalski, President, SCSI Trade Association

Abstract

SCSI continues to be the backbone of enterprise storage deployments and has rapidly evolved by adding new features, capabilities, and performance enhancements. This talk will include an up-to-the-minute recap of the latest additions to the SAS standard and roadmaps. It will focus on the status of 12Gb/s SAS staging, advanced connectivity solutions such as MultiLink SAS™, and SCSI Express, a new transport for SOP (SCSI over PCIe). Presenters will also provide updates on new SCSI features such as atomic writes and remote copy.

Learning Objectives

  • Attendees will learn how SAS will grow and thrive, in part, because of the Advanced Connectivity Roadmap, which offers a solid connectivity scheme based on the versatile Mini-SAS HD connector in addition to SAS Connectivity Management support.
  • Attendees will learn how Express Bay improves the way slot-oriented Solid State Drive (SSD) devices can be configured to boost I/O performance.
  • The latest development status and design guidelines for 12Gb/s SAS will be discussed as well as plans for extending SAS to 24Gb/s.
  • Attendees will learn the details of the standardization activity and architecture for SCSI over PCIe (SOP and PQI).

Extending SAS Connectivity in the Data Center

Bob Hansen, Storage Architecture Consultant, LHP Consulting Group

Abstract

Serial Attached SCSI (SAS) is the connectivity solution of choice for disk drives and JBODs in the data center today. SAS connections are getting faster while storage solutions are getting larger and more complex. Data center configurations and disaster recovery solutions are demanding longer cable distances. This is making it more and more difficult or impossible to configure systems using passive copper cables. This presentation discusses the application, limitations and performance of passive copper, active copper and optical SAS cabling options available today and those likely to be available in the next few years.

Learning Objectives

  • Review SAS network topologies for data center applications
  • Understand SAS connectivity options, limitations and performance
  • Looking to the future – discuss possible networking/connectivity changes for SAS 4

 


SCSI and FC Standards Update

Fred Knight, Standards Technologist, NetApp

Abstract

Fred Knight is a Principal Engineer in the CTO Office at NetApp. Fred has over 35 years of experience in the computer and storage industry. He currently represents NetApp in several National and International Storage Standards bodies and industry associations, including T10 (SCSI), T11 (Fibre Channel), T13 (ATA), IETF (iSCSI), SNIA, and FCIA. He is the chair of the SNIA Hypervisor Storage Interfaces working group, the primary author of the SNIA HSI White Paper, the author of the new IETF iSCSI update RFC, and the editor for the T10 SES-3 standard. Fred has received the INCITS Technical Excellence Award for his contributions to both T10 and T11. He is also the developer of the first native FCoE target device in the industry. At NetApp, he contributes to technology and product strategy and serves as a consulting engineer to product groups across the company. Prior to joining NetApp, Fred was a Consulting Engineer with Digital Equipment Corporation, Compaq, and HP where he worked on clustered operating system and I/O subsystem design.

Learning Objectives

  • This session will provide basic information on new capabilities being proposed for SCSI, iSCSI, SAS, and Fibre Channel. Attendees will be able to evaluate these new capabilities for possible use in their application environments, and to engage in an informed discussion with vendors about their use cases for these new capabilities.

 


Cloud

CDMI and Scale Out File System for Hadoop

Philippe Nicolas, Director of Product Strategy, Scality

Abstract

Scality leverages its own file system for Hadoop, replacing HDFS while maintaining the HDFS API. The Scality Scale Out File System, aka SOFS, is a POSIX parallel file system based on a symmetric architecture. This implementation addresses the Name Node limitations, both in terms of availability and bottleneck, by eliminating the metadata server entirely in SOFS. Scality also leverages CDMI and continues its effort to promote the standard as the key element for data access. Scality capitalizes on two data protection techniques - Replication and Erasure Coding with Scality ARC - to boost data access, improve data durability, and reduce hardware footprint and costs.

Learning Objectives

  • Realize the potential of CDMI
  • Illustrate a new usage of CDMI
  • Address Hadoop limitations
  • Introduce methods to improve data durability
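
To make the CDMI data path this talk builds on concrete, here is a minimal sketch of creating and reading a CDMI data object over HTTP with Python. The endpoint URL is a placeholder; the headers and JSON body follow the CDMI object model (value plus mimetype in a JSON envelope).

    import requests

    BASE = "https://cdmi.example.com"  # placeholder CDMI-compliant endpoint
    VERSION = {"X-CDMI-Specification-Version": "1.0.2"}

    # Create a data object: CDMI carries the value and metadata in a JSON body.
    body = {"mimetype": "text/plain", "value": "hello, cdmi"}
    headers = dict(VERSION, **{"Content-Type": "application/cdmi-object"})
    resp = requests.put(BASE + "/container/hello.txt", json=body, headers=headers)
    resp.raise_for_status()

    # Read it back: the response JSON includes the value, metadata, and object ID.
    obj = requests.get(BASE + "/container/hello.txt", headers=VERSION).json()
    print(obj["objectID"], obj["value"])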

Lessons Learned Implementing Cross-protocol Compatibility Layer

Scott Horan, Integration Engineer, Cleversafe, Inc

Abstract

Over the past year, we have integrated our storage solution with a number of cloud and object storage APIs, including Amazon S3, WebDAV, OpenStack, and HDFS. While these protocols share much commonality, they also differ in meaningful ways, which complicates the design of a cross-protocol compatibility layer. In this presentation, we detail how the various storage protocols are the same, how they differ, and what design decisions were necessary to build an underlying storage API that meets the requirements to support all of them. Further, we consider the lessons learned and provide recommendations for developing cloud storage APIs such as CDMI.

Learning Objectives

  • Features of the various popularly used cloud storage protocols
  • Differences between the protocols: how they authenticate, how they handle multi-part objects, and how they guarantee integrity
  • Overview of the design decisions we made to support all these protocols on top of the same underlying storage
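
As a hedged illustration of what a cross-protocol compatibility layer converges on, the sketch below defines a hypothetical least-common-denominator object interface that S3-, Swift-, and HDFS-style frontends could each be mapped onto. The method set is an invented example, not Cleversafe's actual internal API.

    from abc import ABC, abstractmethod

    class ObjectStore(ABC):
        """Hypothetical common storage API beneath several protocol frontends."""

        @abstractmethod
        def put(self, bucket: str, key: str, data: bytes, metadata: dict) -> str:
            """Store an object; return an entity tag usable for integrity checks."""

        @abstractmethod
        def get(self, bucket: str, key: str, byte_range=None) -> bytes:
            """Read an object; ranged reads back both S3 Range GETs and HDFS seeks."""

        @abstractmethod
        def complete_multipart(self, bucket: str, key: str, part_etags: list) -> str:
            """Stitch previously uploaded parts; S3 and Swift expose this
            differently, so the common layer must pick semantics both map to."""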

Profile Based Compliance Testing of CDMI: Approach, Challenges & Best Practices

Sachin Goswami, Solution Architect and Storage COE Head, Hi Tech TATA Consultancy Services

Abstract

Cloud Data Management Interface specifications are now moving towards profile-based categories, driven by increased focus from organizations planning to adopt profile-based CDMI in their products - for example, the Service, ID, and Self Storage Management profiles. TCS has been focused on implementing a ‘CDMI Automated Test Suite’ and, with new developments, is working to incorporate profiling into it. In this proposal we will share the approach and challenges for testing profile-based scenarios for CDMI profile-based compliance of cloud products. We will also share additional challenges and learnings from testing CDMI products for compliance. These learnings will serve as best practices and a ready reference for other organizations developing their own CDMI product suite.

Learning Objectives

  • Understand the CDMI profile-based specification
  • Understand the profile-based testing approach
  • Understand the existing gaps in the CDMI profile-based specification

Transforming Cloud Infrastructure to Support Big Data

Dr. Ying Xu, R&D Lead, Aspera Inc

Abstract

Cloud systems promise virtually unlimited, on-demand increases in storage, computing, and bandwidth. As companies have turned to cloud-based services to store, manage and access big data, it has become clear that this promise is tempered by a series of technical bottlenecks: transfer performance over the WAN, HTTP throughput within remote infrastructures, and size limitations of the cloud object stores. This session will discuss principles of cloud object stores, using examples of Amazon S3, Microsoft Azure, and OpenStack Swift, and performance benchmarks of their native HTTP I/O. It will share best practices in orchestration of complex, large-scale big data workflows. It will also examine the requirements and challenges of such IT infrastructure designs (on-premise, in the cloud or hybrid), including integration of necessary high-speed transport technologies to power ultra-high speed data movement, and adoption of appropriate high-performance network-attached storage systems.

Learning Objectives

  • Attendees will learn methods to overcome the technical bottlenecks associated with using cloud-based services. Attendees will also gain insight into how to take advantage of the cloud for storing, managing and accessing big data using high-speed transport technologies and high-performance NAS systems.
  • Attendees will learn what it takes to plan and implement complex, large-scale data workflows. This includes the requirements and challenges of designing IT infrastructure, how to ensure secure file transfers and storage, and real-world examples from industry leaders.
  • Attendees will gain a better understanding of the differing requirements and challenges of on-premise, cloud and hybrid infrastructure designs.

Resilience at Scale in the Distributed Storage Cloud

Alma Riska, Consulting Software Engineer, EMC

Abstract

The cloud is a diffuse and dynamic place to store both data and applications, unbounded by data centers and traditional IT constraints. However, adequate protection of all this information still requires consideration of fault domains, failure rates, and repair times that are rooted in the same data centers and hardware we attempt to meld into the cloud. This talk will address the key challenges to a truly global data store, using examples from the Atmos cloud-optimized object store. We discuss how flexible replication and coding allow data objects to be distributed and where automatic decisions are necessary to ensure resiliency at multiple levels. Automatic placement of data and redundancy across a distributed storage cloud must ensure resiliency at multiple levels, i.e., from a single node to an entire site. System expansion must occur seamlessly, without affecting data reliability and availability. All these features together ensure data protection while fully exploiting the geographic dispersion and platform adaptability promised by the cloud.

Learning Objectives

  • Learn how to build truly large distributed storage systems
  • Understand fault domains, failure considerations
  • Understand how to reason about data resilience at large scale

Open-source CDMI-compliant Proxy: "Stoxy"

Ilja Livenson, KTH Royal Institute of Technology

Abstract

We present our ongoing effort in developing an open-source server (Stoxy - STOrage proXY), which exposes a CDMI-compliant interface on the frontend and allows data to be stored and managed on several public and private cloud backends, including AWS and MS Azure. The work is based on the CDMIProxy prototype developed in the EU VENUS-C project, with continuous development sponsored by the EGI-InSPIRE project as well as commercial companies. The presentation will cover the architecture of Stoxy and highlight certain key components, especially those related to data streaming. In addition, we shall give initial experience from integrating Stoxy with other products (cloud managers, storage servers) inside the EGI Federated Cloud task force.

Learning Objectives

  • Introduction of the open-source product and its use cases
  • Experience from implementing the streaming proxy server to public cloud backends
  • Experience from integration of CDMI with a federated cloud test-bed
  • Interoperability testing with OpenStack Swift

LTFS and CDMI - Tape for the Cloud

David Slik, Technical Director, NetApp

Abstract

LTFS tape technology provides compelling economics for bulk cloud storage and transportation of data. This session provides an overview of the use cases identified by the joint LTFS and Cloud Technical Working Group, including when tape provides lower-cost alternatives to network and disk-based transportation, and when tape provides lower-cost alternatives to disk-based storage and archiving. This session will introduce standardization efforts underway to allow for simple tape-based bulk data transport to, from, and between clouds, and standardization of how to store rich object data on standard LTFS tapes.

Learning Objectives

  • Learn about the advantages of tape for bulk data movement to, from and between clouds
  • Learn about the advantages of tape for bulk data storage within clouds
  • Learn about standardization activities around tape-based bulk data transport
  • Learn about how CDMI objects can be stored on LTFS tapes

CDMI Federations, Year 4

David Slik, Technical Director, NetApp

Abstract

In addition to standardizing client-to-cloud interactions, the SNIA Cloud Data Management Interface (CDMI) standard enables a powerful set of cloud-to-cloud interactions. Federations, the mechanism by which CDMI clouds establish cloud-to-cloud relationships, provide a powerful multi-vendor and interoperable approach to peering, merging, splitting, migrating, delegating, sharing, and exchange of stored objects. In last year's SDC presentation, bi-directional federation between two CDMI-enabled clouds was discussed and demonstrated. For year four, we will discuss how CDMI federation, when combined with CDMI versioning, enables mobile and web-based applications to synchronize data with clouds, effectively allowing clients to create "mini-clouds" local to the client. This architectural approach allows clients to easily store, cache, and merge cloud-resident content, enables disconnected operation, and provides a foundation for application-specific conflict resolution.

Learning Objectives

  • Learn how clients can use CDMI federation to synchronize local and cloud-resident data stores
  • Learn how versioning and globally unique identifiers enable multiple concurrent writers without synchronization
  • Learn techniques that enable automated conflict detection and application-specific conflict resolution
  • See a multi-client demonstration of how CDMI Federation simplifies application data management
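
As a hedged illustration of how versioning and unique identifiers can admit multiple concurrent writers, the sketch below compares version vectors to classify two replicas of an object as ordered or conflicting. This is a generic technique shown for intuition; CDMI's actual versioning semantics are defined by the specification, not by this code.

    def compare(vv_a: dict, vv_b: dict) -> str:
        """Compare version vectors {writer_id: counter}; returns 'equal',
        'a<=b', 'b<=a', or 'conflict' for concurrent updates."""
        a_le_b = all(vv_a.get(w, 0) <= vv_b.get(w, 0) for w in vv_a)
        b_le_a = all(vv_b.get(w, 0) <= vv_a.get(w, 0) for w in vv_b)
        if a_le_b and b_le_a:
            return "equal"
        if a_le_b:
            return "a<=b"      # b strictly supersedes a: safe fast-forward
        if b_le_a:
            return "b<=a"
        return "conflict"      # concurrent writes: application-specific merge

    # A phone and a laptop both updated the same object while disconnected:
    print(compare({"phone": 2, "laptop": 1}, {"phone": 1, "laptop": 2}))  # conflict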

Windows Azure Storage - Speed and Scale in the Cloud

Joe Giardino, Senior Development Lead, Microsoft

Abstract

In today’s world, increasingly dominated by mobile and cloud computing, application developers require durable, scalable, reliable, and fast storage solutions like Windows Azure Storage. This talk will cover the internal design of the Windows Azure Storage system and how it is engineered to meet these ever-growing demands. The session will have a particular focus on performance, scale, and reliability. In addition, we will cover patterns and best practices for developing performant solutions on storage that optimize for cost, latency, and throughput. Windows Azure Storage is currently leveraged by clients to build big data and web-scale services such as Bing, Xbox Music, SkyDrive, Halo 4, Hadoop, and Skype.

Learning Objectives

  • Windows Azure Storage Fundamentals
  • Designing an application that scales

CDMI, The Key Component of Scality Open Cloud Access

Giorgio Regni, CTO, Scality

Abstract

Announced during SNIA SDC 2012, Scality Open Cloud Access, aka OCA, was the first converged access method between file and object mode, from a local or remote site. Dewpoint, the Scality CDMI server, demonstrated during the previous three annual CDMI plugfests at SDC, continues to be the pivotal component of this strategy. During the last year, Scality leveraged CDMI to build the Scality solution for Hadoop, and it plans to announce a few other innovations.

Learning Objectives

  • Illustrate utilization of CDMI
  • How to build convergent data access methods

Architecting An Enterprise Storage Platform Using Object Stores

Niraj Tolia, Chief Architect, Maginatics

Abstract

While object storage systems such as S3 and Swift are exhibiting rapid growth, there is still an impedance mismatch between their feature set and enterprise requirements. This talk dives into the design and architecture of MagFS: a strongly consistent and multi-platform distributed file system that layers itself on top of multiple object storage systems. In particular, it covers the challenges of using eventually consistent object stores, optimizing both data and metadata traffic for wide-area network communication and mobile devices, and how MagFS delivers an on-premises security model while still being able to leverage off-premises storage. This talk also discusses how specific enterprise requirements have influenced the technical design of MagFS and some of the surprises we encountered during our design and implementation.


COSBench: A Benchmark tool for Cloud Storage

Yaguang Wang, Sr. Software Engineer, Intel

Abstract

With object storage services becoming increasingly accepted as a new offering alongside traditional file and block systems, it is important to effectively measure the performance of these services, so that users can compare different solutions or tune their systems for better performance. However, little has been reported on this specific topic as yet. To address this problem, we developed COSBench (Cloud Object Storage Benchmark), a benchmark tool we are currently working on inside Intel for cloud object storage services. In addition, we will share the status of CDMI support and the results of the experiments we have performed so far.

Learning Objectives

  • Learn what COSBench is and how it works
  • Learn what kind of information can be obtained from COSBench
  • Learn the key differences between CDMI and other object store interfaces such as S3

DATA MANAGEMENT

Virtual Machine Archival System

Parag Kulkarni, VP Engineering, Calsoft Inc.
Dr. Anupam Bhide, CEO Co-Founder, Calsoft Inc.

Abstract

Popular server virtualization vendors have enabled integration with backup and recovery solutions, but not with virtual machine archival systems. A server virtualization system should have knowledge of the various storage systems attached to it, such as SSD, HDD, Object Storage, tape library, and cloud - the VMware ecosystem, for instance.

We propose a ‘Virtual Machine Archival System’ with the following functionality:

  • Deciding which type of storage should be used as the destination
  • Labeling the locations of VM data
  • A discovery interface and VM archival policies

The server virtualization system facilitates the following functionality:

  • Enumerating the storage types available
  • Creating archival links in the file system containing VM data
  • Passing archival and restore requests on to the archival system
  • GUI integration for the archival system

Learning Objectives

  • Understand the virtual machine archival process
  • Appreciate the importance of server virtualization
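
A small hypothetical sketch of the first proposed function, choosing an archival destination from a policy. The tier names and idle-time thresholds are invented for illustration only.

    from datetime import datetime, timedelta

    # Hypothetical policy: the longer a VM has been idle, the colder its tier.
    TIERS = [(timedelta(days=30), "ssd"),
             (timedelta(days=180), "hdd"),
             (timedelta(days=730), "object-storage")]

    def archival_destination(last_accessed: datetime, now: datetime) -> str:
        """Pick a destination storage type based on how long the VM has been idle."""
        idle = now - last_accessed
        for threshold, tier in TIERS:
            if idle < threshold:
                return tier
        return "tape-or-cloud"  # coldest data goes to archival media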

Data Deduplication as a Platform for Virtualization and High Scale Storage

Adi Oltean, Principal Software Design Engineer, Microsoft

Abstract

The primary data deduplication system in Windows Server 2012 is designed to achieve high deduplication savings at low computational overhead on commodity storage platforms. In this talk, we will build upon that foundational work and present new techniques to scale primary data deduplication on both the primary data serving and optimization pathways. This will include hardware accelerated performance improvements for hashing and compression, better file system integration to reduce write path overheads and optimize live files, and deduplication aware caching to mitigate disk bottlenecks. We will show how this enables deduplication to be leveraged as a platform for storage virtualization.

Learning Objectives

  • Fundamental building blocks for a primary data deduplication system.
  • Deduplication data serving for “live data” as a storage layer for virtualization workload.
  • Optimization of data at high scale and little to zero impact on compute resources of virtualization platform.
  • Utilizing data deduplication as a means to implement an efficient system cache.
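
To ground the building blocks named above, here is a deliberately simplified sketch of the core of a dedup pipeline: chunking, strong hashing, a chunk index, and compression of unique chunks. It uses fixed-size chunks and an in-memory dictionary for clarity; the Windows Server 2012 deduplication described in the talk uses variable-size chunking and on-disk index structures.

    import hashlib
    import zlib

    chunk_index = {}  # hash -> compressed chunk (stand-in for an on-disk store)

    def dedup_write(data: bytes, chunk_size: int = 64 * 1024) -> list:
        """Split data into chunks, store each unique chunk once, return recipe."""
        recipe = []
        for off in range(0, len(data), chunk_size):
            chunk = data[off:off + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in chunk_index:        # new chunk: compress and store it
                chunk_index[digest] = zlib.compress(chunk)
            recipe.append(digest)                # the file becomes a list of hashes
        return recipe

    def dedup_read(recipe: list) -> bytes:
        """Rebuild file contents from its chunk recipe."""
        return b"".join(zlib.decompress(chunk_index[d]) for d in recipe)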

DEVELOPMENT METHODOLOGIES

Using Big Data Analytics for Agile Software Development

Ashish Goyal, Principal Software Engineer, EMC

Abstract

Reporting is a critical element in agile software development in large projects with distributed teams and strict governance requirements. Reporting includes the software quality and integration effort of the most recent build. Developers provide information about code changes, source-code analysis, and unit test results. The testing team provides information about test runs. Release engineers provide information on the quality of the current build, uptime, and defects logged. Historical data can also be used for comparing quality and testing trends. All of this information comes from independent sources. To give customers, management, and team members better insight into the state of software development, Big Data analytics can be used to analyze the data. Analytics can help identify the test cases that break most frequently, the source-code changes most likely to break unit and functional tests, and the areas of the code base that require additional tests or redesign. Historical and current defect reports can be used to estimate the number of defects customers might report for a new release.

Learning Objectives

  • Issues affecting agile software development for distributed teams
  • How Big Data analytics provides insight into software quality
  • Provide management with a view of whether product development is on track

A Method to Establish a Concurrent Rapid Development Cycle and High Quality in a Storage Array System Environment

M. K. Jibbe, Director of Quality Architect Team for NetApp APG Products, NetApp
Kuok Hoe Tan, QA Architect, NetApp

Abstract

Shift Left is a combination of a change in development and validation approaches and key engineering framework improvements to ensure that each phase of the release process provides a solid foundation for the subsequent phase, through final product release. As the name implies, the goal is to move development and validation earlier in the release cycle so that content design, development, validation, and bug fixes occur when the bulk of the engineering resources are engaged and available. Each release phase is focused on delivering the building blocks for a successful and high-quality release. We have adopted industry-standard best practices and methodologies to bolster our engineering framework and process to support this transformation. Agile forms the cornerstone of our new content development scrum teams, with the goal of maintaining a potentially shippable product early and consistently.

Learning Objectives

  • Learn about early validation from early development to full system validation
  • Learn about the focused targeted outcome for each phase of the release process
  • Learn about Key release metrics to track progress to key outcome for each phase of the release process
  • Learn about internalized industry-standard best practices and methodologies that form the framework and foundation for driving continuous improvements to the release process

Code Coverage as a Process

Aruna Prabakar, Software Engineer, EMC
Niranjan Page, Engineering Manager, EMC

Abstract

There are many tools that measure code coverage for different languages, but the data is of no use if it is not applied to improve the quality of the product through testing. In this presentation I will share the successful process and infrastructure we use at EMC/DataDomain. After this presentation, the audience will be able to start thinking about code coverage from both a development and a QA perspective if they don't already have such a process.


DISTRIBUTED STORAGE

Distributed Storage on Limping Hardware

Andrew Baptist, Lead Architect, Cleversafe, Inc.

Abstract

It is easy to design storage systems that assume nothing bad ever happens. It is marginally harder to design one that assumes nodes are either available or not. What is difficult is designing storage systems that handle how nodes fail in the real world. Such "limping nodes" may respond slowly, occasionally, or unpredictably; they are neither entirely failed nor entirely healthy. This presentation covers the mechanisms we developed for dealing with limping nodes in a distributed storage system. These techniques allow limping nodes to be tolerated with negligible impact on performance, latency, or reliability. We introduce some of the intelligent writing techniques we created for this purpose, which include: write thresholds, impatient writes, optimistic writes, real-time writes, and lock-stealing writes.

Learning Objectives

  • How nodes fail in the real world
  • What can happen if a distributed storage system doesn't handle limping nodes well
  • Techniques we have developed for better handling of limping nodes, and the results we have obtained
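
A minimal sketch of the "write threshold" idea from the techniques above: issue the write to all nodes in parallel and declare success once a threshold of acknowledgments arrives, so a limping node cannot stall the operation. The node object's write method is a hypothetical stand-in.

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def threshold_write(nodes, key, value, write_threshold):
        """Return True once `write_threshold` nodes acknowledge the write;
        slow (limping) nodes are left to finish in the background."""
        pool = ThreadPoolExecutor(max_workers=len(nodes))
        futures = [pool.submit(n.write, key, value) for n in nodes]  # n.write is hypothetical
        acks = 0
        for fut in as_completed(futures):
            if fut.exception() is None:
                acks += 1
            if acks >= write_threshold:
                pool.shutdown(wait=False)  # do not wait on the stragglers
                return True
        pool.shutdown(wait=False)
        return False                       # too few healthy nodes acknowledged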

Getting the Most out of Erasure Codes

Jason Resch, Lead Software Engineer, Cleversafe, Inc.

Abstract

Erasure codes are a recent addition to many storage technologies. They provide increased reliability with less overhead. Yet, they are not without downsides. Selecting the best parameters for erasure coding is a complex optimization problem. As one varies the threshold, write threshold, number of pieces, system capacity, and site count, there may be drastic effects on reliability, availability, storage overhead, rebuilding cost, and CPU expense. Selecting erasure code parameters without weighed consideration may have catastrophic results. In this presentation we present the techniques we developed and use for designing erasure coded systems with the best combination of storage overhead, computational efficiency, reliability, availability, and rebuilding cost for any given system constraints. Finally, we introduce some advanced techniques for reducing rebuilding cost.

Learning Objectives

  • What the various parameters are in an erasure coded system
  • The interrelationships between the parameters and how they affect the system's properties
  • Our methods for selecting the parameters to optimally achieve the goals for the system
  • Advanced techniques for mitigating rebuilding cost in an erasure coded system
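
As a small worked example of these trade-offs, the sketch below computes the storage overhead and the probability of data loss for an (n, k) erasure code under independent slice failures, which is enough to compare, say, a 16-slice/10-threshold code against 3-way replication. Real parameter selection must also model rebuild times and correlated failures, as the talk discusses.

    from math import comb

    def storage_overhead(n: int, k: int) -> float:
        """Raw bytes stored per user byte: n slices are written for every k of data."""
        return n / k

    def loss_probability(n: int, k: int, p: float) -> float:
        """P(data loss) = P(more than n-k of the n slices fail),
        with slice failures modeled as independent with probability p."""
        return sum(comb(n, f) * p**f * (1 - p)**(n - f)
                   for f in range(n - k + 1, n + 1))

    # A 16-of-10 code vs 3-way replication, at a 1% slice failure probability:
    print(storage_overhead(16, 10), loss_probability(16, 10, 0.01))  # 1.6x overhead
    print(storage_overhead(3, 1), loss_probability(3, 1, 0.01))      # 3.0x overhead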

Method to Establish a High Availability and High Performance Storage Array in a Green Environment

M. K. Jibbe, Director of Quality Architect Team for All APG Products, NetApp
Marlin Gwaltney, Quality Architect, NetApp

Abstract

The method of utilizing Dynamic Disk Pools, using the Controlled Replication Under Scalable Hashing data mapping algorithm integrated with Solid-State Drives (SSDs) in a storage array, provides not only high availability and high performance but also an environmentally friendly “green” system. SSDs are efficient and require less system cooling in the same footprint as a system with mechanical HDDs.

Dynamic Disk Pools use an algorithm that distributes data, parity information, and spare capacity across a pool of drives instead of using the standard RAID sequential data striping algorithms, so they are able to use every drive in the pool for the intensive process of rebuilding data in the event of drive failure(s). A key benefit of Dynamic Disk Pools is faster rebuild times (up to 8 times shorter than standard RAID algorithms), which leads to improved data protection and a lower I/O performance penalty, since the system is in a vulnerable, degraded state for a much shorter period of time. Dynamic Disk Pools also include flexible configuration options (from a single pool containing all drives to multiple pools in a system) to optimize the system for end customer requirements. The shorter rebuild times of Disk Pools are further reduced by the use of higher-performing and more reliable SSDs.

The individual advantages of utilizing SSDs and/or Dynamic Disk Pools in a storage array are further magnified when they are integrated together to provide a higher-availability/reliability, higher-performance, and more environmentally friendly system. These three technologies can be leveraged together to build an extremely flexible, cost-effective, scalable storage solution.

Learning Objectives

  • Learn about the Dynamic Disk Pool feature (design and benefits)
  • Learn how the Flash Read Cache method optimizes the investment in solid state drives
  • Learn about Solid State Disk performance and power requirements
  • Learn how the above three technologies can be leveraged together to build an extremely flexible, cost-effective, scalable storage solution

LRC Erasure Coding in Windows Storage Spaces

Cheng Huang, Researcher, Microsoft

Abstract

RAID is the standard approach for fault tolerance among multiple disk drives and has been around for decades. However, new hardware trends, including the advent of hard disk drives (HDDs) with huge capacity and the wide adoption of solid state drives (SSDs) with fast I/O, have created new opportunities to optimize fault tolerance schemes. Windows now introduces a new fault tolerance scheme in its Storage Spaces technology. The new scheme is based on a novel erasure coding technology called Local Reconstruction Code (LRC). Compared to RAID at the same durability, LRC significantly reduces rebuild time while still keeping storage overhead very low. In addition, LRC offers much more flexibility in balancing rebuild time and storage overhead. The presentation will provide an overview of the Windows Storage Spaces technology, cover the design of its fault tolerance mechanism, discuss the implementation of LRC in detail, and share experiences learned from real-world workloads.

Learning Objectives

  • Refresh your knowledge of erasure coding. (Some knowledge is assumed – this is *not* a tutorial on erasure coding. For the basics, see the USENIX FAST 2013 tutorial “Erasure Coding for Storage Applications” by Plank and Huang.)
  • Get an overview of Windows Storage Spaces technology and its fault tolerance mechanism.
  • Understand the implementation of LRC and its benefits in clustered storage systems.
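
To give intuition for the local-parity idea, here is a toy sketch in which data fragments are split into groups and each group gets an XOR local parity, so a single lost fragment is rebuilt from only its small group rather than from all fragments. This is a simplification for illustration; the LRC in Storage Spaces also adds global parities computed with Reed-Solomon-style coefficients.

    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    def local_parities(data_blocks, group_size):
        """One XOR parity per group: the 'local reconstruction' part of LRC."""
        groups = [data_blocks[i:i + group_size]
                  for i in range(0, len(data_blocks), group_size)]
        return [xor_blocks(g) for g in groups]

    def rebuild(group, lost_index, parity):
        """Recover a lost block from its group's survivors plus the local parity,
        touching group_size blocks instead of every block in the stripe."""
        survivors = [b for i, b in enumerate(group) if i != lost_index]
        return xor_blocks(survivors + [parity])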

FILE SYSTEMS

Snapshot Cauterization

Sandeep Joshi, Manager, EMC
Narain C. Ramdass, Manager, EMC

Abstract

Snapshot cauterization and MetaSnaps: when a user takes a snapshot of a filesystem, it captures all the data, some of which the user may not want to retain. Presently there are no mechanisms for a user to delete this data without deleting the whole snapshot. We present methods to cauterize such unwanted data from a snapshot in order to reclaim space. This technique can be used to build further features that will be useful for file system analytics.

Learning Objectives

  • Snapshots
  • Need for Snapshot cauterization
  • Maintaining Snapshot consistency
  • Future enhancements

Multiprotocol Locking and Lock Failover in OneFS

Aravind Velamur Srinivasan, Senior Software Engineer, EMC, Isilon Systems

Abstract

This talk will examine how multiprotocol locking is implemented in a distributed clustered file system such as Isilon’s OneFS, look into the existing lock failover implementation in OneFS for NFS, and show how it can be extended to implement lock failover for SMB3. A clustered file system such as Isilon’s OneFS can have multiple clients accessing the server using different protocols such as SMB and NFS. A robust and efficient distributed lock manager is necessary to achieve both protocol correctness and data consistency in the presence of multi-protocol access to data/files. We also need a failover mechanism to implement the failover semantics of these protocols so that locks are not lost even when a node in the cluster goes down. This talk will examine the details of such a locking mechanism in OneFS.

Learning Objectives

  • Details of the distributed lock manager in OneFS
  • Challenges in implementing multiprotocol locking on a clustered file system and how it is made possible in OneFS
  • Details of the design and implementation of lock failover for NFS
  • Challenges in extending the lock failover for SMB

HDFS - What is New and Future

Sanjay Radia, Co-founder, Hortonworks
Suresh Srinivas, Hortonworks

Abstract

Hadoop 2.0 offers significant HDFS improvements: the new append-pipeline, federation, wire compatibility, NameNode HA, performance improvements, etc. We describe these features and their benefits. We also discuss development that is underway for the next HDFS release. This includes much-needed data management features such as Snapshots and Disaster Recovery. We are adding support for different classes of storage devices, such as SSDs, and open interfaces such as NFS; together these extend HDFS into a more general storage system. As with every release, we will continue improvements to the performance, diagnosability, and manageability of HDFS.


Snapshots for Ibrix - Highly Distributed Segmented Parallel FS

Boris Zuckerman, Distinguished Technologist, HP

Abstract

This presentation explores the design of ‘native snapshots’ for scale-out segmented parallel file systems (Ibrix). An appropriate model of snapshots requires flexibility and fluidity to allow easy selection of objects, and reliability to assure the logical unity of such subsets. We scale linearly when adding servers and segments, fundamentally by limiting the number of objects participating in operations and by de-centralizing control over metadata. With snapshots, the associated state transition has to affect not only directly referenced objects, but must be immediately propagated to all the descendant nodes controlled by a large number of other servers. We also look into recovery: achieving quick rollback that logically resets the state of the subspace to a desired point in time, while allowing the corresponding longer-running cleanup processes to finish in the background.

Learning Objectives

  • Expose fundamentals of highly distributed segmented parallel file system architecture
  • Review the challenges of implementing snapshots in such an environment
  • Define Snap Identities as dynamically inheritable attributes
  • Logical preservation of name components in snapshots, avoiding large-scale data flushes at snap time

A Brief History of the BSD Fast Filesystem

Dr. Marshall Kirk McKusick, Computer Scientist, Author and Consultant

Abstract

This talk provides a taxonomy of filesystem and storage development from 1979 to the present with the BSD Fast Filesystem as its focus. It describes the early performance work done by increasing the disk block size and by being aware of the disk geometry and using that knowledge to optimize rotational layout. With the abstraction of the geometry in the late 1980's and the ability of the hardware to cache and handle multiple requests, filesystems ceased trying to track geometry and instead sought to maximize performance by doing contiguous file layout. Small file performance was optimized through the use of techniques such as journaling and soft updates. By the late 1990's, filesystems had to be redesigned to handle the ever-growing disk capacities. The addition of snapshots allowed for faster and more frequent backups. The increasingly harsh environment of the Internet required the greater data protection provided by access-control lists and mandatory-access controls. The talk concludes with a discussion of the addition of the symmetric multi-processing support needed to utilize all the CPUs found in the increasingly ubiquitous multi-core processors.


Scale-out Storage Solution

Mahadev Gaonkar, Technical Architect, iGATE

Abstract

Today, data is growing at an exponential rate, and the need for an efficient storage mechanism has become more critical than ever. In this presentation, we will discuss a scale-out storage solution intended to address small and medium businesses in a cost-effective manner. This is a Linux-based, software-only solution that works on commodity hardware. It is a POSIX-compliant solution and provides file storage through CIFS/NFS interfaces. The entire solution is designed to have a small footprint and easy installation on available Linux machines. This paper presents technical details of the solution and its implementation challenges. In addition, the paper will discuss tools and techniques used to test a scale-out storage product.

Learning Objectives

  • Overview of the Scale-out storage solution
  • Solution details - architecture of the Distributed File system, Storage workload distribution mechanism, Key optimizations to achieve higher performance
  • Testing scale-out storage - challenges in testing scale-out products; stress, scalability, and performance testing; and various open source tools

Advancements in Windows File Systems

Andy Herron, Principal Software Developer, Microsoft

Abstract

There are some advances, refinements, and improvements in Windows File Systems coming, which we'll be able to talk about at SDC 2013. Stay tuned for more details.


Cluster Shared Volumes

Vladimir Petter, Principal Software Design Engineer, Microsoft

Abstract

Cluster Shared Volumes is a cluster file system for the Windows Hyper-V and File Server workloads. It enables concurrent access to volumes and files from any node in a Windows Server Failover Cluster. In this session, we will describe how Cluster Shared Volumes leverages and extends existing Windows technology, such as NTFS for metadata and storage allocation, SMB 3.0 for high-speed interconnect, Volume Snapshot Service for distributed backups, oplocks for cache coherency, and failover clusters for multi-node coordination.

Learning Objectives

  • Review the scenarios targeted by Cluster Shared Volumes.
  • Explain how Cluster Shared Volumes is layered between client applications and NTFS.
  • Understand the conditions under which multiple cluster nodes can concurrently access NTFS volumes at block level.
  • Describe how SMB 3.0 and failover clusters are used to efficiently solve multi-node problems, such as metadata updates and snapshot coordination.

Balancing Storage Utilization Across a Global Namespace

Manish Motwani, Lead Software Developer, Cleversafe, Inc.

Abstract

Global namespaces represent the pinnacle of scalability as no central authority need be consulted to locate or update a resource. Just as DNS has enabled the Internet to scale to billions of hosts, global namespaces have much utility for scaling storage systems to Petabytes and beyond. Yet there are trade-offs to be made. The less dynamic the namespace the greater the scalability, but a more rigid namespace restricts data migration and rebalancing choices. We describe the trade-offs we made in designing a namespace that scales to Exabytes and how we deal with storage imbalance and expansion.

Learning Objectives

  • What a global namespace is, and the benefits it provides over traditional metadata or lookup services.
  • Limitations imposed by a rigid namespace, in terms of where data can be migrated or moved to without causing the namespace mapping to change or expand in size.
  • The design of our global namespace and algorithms employed to balance utilization across a storage system of thousands of nodes.
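
One common way to get a namespace that no central authority must mediate is to derive placement deterministically from the name itself, for example with a hash ring. The sketch below is a generic illustration of that idea, not Cleversafe's actual namespace mapping.

    import bisect
    import hashlib

    def _h(s: str) -> int:
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    class HashRing:
        """Name -> node mapping any client can compute from just the node list."""

        def __init__(self, nodes, vnodes=64):
            # Virtual nodes smooth out imbalance across physical nodes.
            self._ring = sorted((_h(f"{n}:{v}"), n)
                                for n in nodes for v in range(vnodes))
            self._keys = [k for k, _ in self._ring]

        def node_for(self, name: str) -> str:
            i = bisect.bisect(self._keys, _h(name)) % len(self._ring)
            return self._ring[i][1]

    ring = HashRing(["node-a", "node-b", "node-c"])
    print(ring.node_for("photos/2013/sdc.jpg"))  # deterministic; no lookup service

The trade-off the abstract describes shows up directly here: the mapping is fixed by the hash function, so rebalancing means changing the ring and migrating whatever data the new mapping displaces.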

HARDWARE

PCI Express and Its Interface to Storage Architectures

Ron Emerick, Principal HW Engineer, Oracle

Abstract

PCI Express Gen2 and Gen3, IO Virtualization, FCoE, SSD, and PCI Express storage devices are here. What are PCIe storage devices, and why do you care? This session describes PCI Express, Single Root IO Virtualization, and the implications for FCoE, SSD, and PCIe storage devices, as well as the impact of all these changes on storage connectivity and storage transfer rates. The potential implications for the storage industry and data center infrastructures will also be discussed.

Learning Objectives

  • Knowledge of PCI Express Architecture, PCI Express Roadmap, System Root Complexes and IO Virtualization.
  • Expected Industry Roll Out of latest IO Technologies and required Root Complex capabilities.
  • Implications and Impacts of FCoE, SSD and PCIe Storage Devices to Storage Connectivity. What does this look like to the Data Center?
  • IO Virtualization connectivity possibilities in the Data Center (via PCI Express).

PCI Express IO Virtualization Overview

Ron Emerick, Principal HW Engineer, Oracle

Abstract

The PCI Express IO Virtualization specifications, working with system virtualization, allow multiple operating systems running simultaneously within a single computer system to natively share PCI Express devices. This session describes PCI Express, Single Root, and Multi Root IO Virtualization. The potential implications for the storage industry and data center infrastructures will also be discussed.

Learning Objectives

  • Knowledge of PCI Express Architecture and Performance Capabilities, System Root Complexes and IO Virtualization
  • The ability of IO Virtualization to change the use of IO options in systems.
  • How PCIe-based storage devices play in IO Virtualization.
  • IO Virtualization connectivity possibilities in the Data Center (via PCI Express).

Addressing Shingled Magnetic Recording drives with Linear Tape File System

Albert Chen, Western Digital
Jim Malina, Technologist, Western Digital

Abstract

Shingled Magnetic Recording is a disruptive technology. It increases the capacity of the drive at the expense of not supporting random writes. This limits the adoption of SMR devices in traditional systems with write-in-place file systems. We can address the write expectations of SMR through various layers of abstraction, from application to firmware. A high abstraction layer provides more room for innovation and a more consistent performance guarantee. Thus, one potential implementation is through the familiar POSIX/Unix file system interface, which provides a stable and familiar abstraction for both the storage vendor and the user. In this presentation we share some of the thoughts, lessons, and experiences that we went through in making the Linear Tape File System work with WD SMR drives.

Learning Objectives

  • What is Shingled Magnetic Recording and what are its benefits and requirements.
  • What are some potential approaches to address SMR requirements in software.
  • What WD learned in utilizing Linear Tape File System for SMR drives.
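
A toy model of the constraint SMR imposes, and why a tape-oriented, append-only file system is a natural fit: each shingled band behaves like a zone with a write pointer that only moves forward. The Zone class below is purely illustrative.

    class Zone:
        """Models an SMR band: writes are accepted only at the write pointer."""

        def __init__(self, size: int):
            self.size = size
            self.write_pointer = 0
            self.data = bytearray(size)

        def append(self, buf: bytes) -> int:
            """Sequential write at the write pointer; random writes are rejected."""
            if self.write_pointer + len(buf) > self.size:
                raise IOError("zone full: reclaim requires rewriting the whole zone")
            start = self.write_pointer
            self.data[start:start + len(buf)] = buf
            self.write_pointer += len(buf)
            return start  # offset where the data landed, as an LBA would be

        def reset(self):
            """Rewind the write pointer; logically invalidates the zone's contents."""
            self.write_pointer = 0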

InfiniBand Architectural Overview

David Deming, President, Solution Technology

Abstract

This session will provide an overview of the entire InfiniBand Architecture, including the application, transport, network, link, and physical layers. It is meant to update the student on current and future enhancements to the IB architecture, including 8 Gbps links and RoCE.


InfiniBand Verbs and Memory Management - RDMA

David Deming, President, Solution Technology

Abstract

This session overviews the verbs interface and the RDMA protocol, including how memory regions and windows are used for inter-processor communication. NVM and SCSI Express both utilize similar programming interfaces (queue pairs) to communicate between host RAM and either another host's RAM or a non-volatile storage device.



Introduction to HP Moonshot

Tracy Shintaku, Distinguished Technologist, HP Server Engineering R&D, HP

Abstract

HP’s Moonshot represents a series of products designed to ease and expedite the onramp of emerging low-power, low-cost, high-density, high-volume technologies in the data center. HP’s first Moonshot System breaks new ground in terms of power efficiency and compute density with a flexible cartridge-based form factor.

Learn about the capabilities of HP Moonshot and emerging technologies as we explore the genesis of the platform, where it could go and what it could mean for storage in the low power, highly efficient data center.

Learning Objectives

  • Learn about the HP Moonshot architecture and the industry trends catalyzing it. Discuss what Moonshot could mean for storage and storage-related applications.

HOT TOPICS

Can Your Storage Infrastructure Handle the Coming Data Storm?

Amritam Putatunda, Technical Marketing Engineer, Ixia

Abstract

In day-to-day operations, a storage infrastructure must effectively perform unique tasks like data storage, backups, access validations, edits, deletes, analysis, etc. Any delay introduced at the storage level impacts user quality of experience (QoE). To ensure an effective storage infrastructure, you must evaluate and optimize the system’s ability to perform under extreme environments. Strong and resilient storage must not only handle today’s data storm – business-critical financial transactions, the fire hose of big data, on-demand video and gaming, etc. – but also store and protect the most precious artifacts of modern-world data.


OpenStack Cloud Storage

Dr. Sam Fineberg, Distinguished Technologist, Hewlett-Packard Company

Abstract

OpenStack is an open source cloud operating system that controls pools of compute, storage, and networking. It is currently being developed by thousands of developers from hundreds of companies across the globe, and is the basis of multiple public and private cloud offerings. In this presentation I will outline the storage aspects of OpenStack, including the core projects for block storage (Cinder) and object storage (Swift), as well as the emerging shared file service. It will cover some common configurations and use cases for these technologies, and how they interact with the other parts of OpenStack. The talk will also cover new developments in Cinder that enable a variety of storage devices and storage fabrics to be used.


KEY NOTE AND FEATURED SPEAKERS

The Impact of the NVM Programming Model

Andy Rudoff, SNIA NVM Programming TWG

Abstract

As exciting new Non-Volatile Memory (NVM) technologies emerge, the SNIA NVM Programming Technical Workgroup (TWG) has been working through the applicable programming models. Andy will talk about the impact these programming models will have on the industry, focusing especially on the more disruptive areas of NVM like Persistent Memory.


Windows Azure Storage – Scaling Cloud Storage

Andrew Edwards, Principal Architect, Windows Azure Storage, Microsoft

Abstract

In today’s world, increasingly dominated by mobile and cloud computing, application developers require durable, scalable, reliable, and fast storage solutions like Windows Azure Storage. This talk will cover the internal design of the Windows Azure Storage system, how it is engineered to meet these ever-growing demands, and lessons learned from operating at scale.


Optical Storage Technologies: The Revival of Optical Storage

Ken Wood, CTO – Technology & Strategy Office of Technology and Planning, Hitachi Data Systems

Abstract

Optical storage is seeing a resurgence in new industry verticals for its improved and unique preservation and environmental qualities. Recent developments have increased capacities and functionality while maintaining decades of backwards compatibility. This is due to the wide range of industries and markets that support this medium.


Hypervisors and Server Flash

Satyam Vaghani, CTO, PernixData

Abstract

Hypervisors and server flash is an important but inconvenient marriage. Server flash has profound technology and programming implications on hypervisors. Conversely, various hypervisor functions make it challenging for server flash to be adopted in virtualized environments. In this talk, we will present specific hypervisor design areas that are challenged by the new physics of storage presented by server flash, and possible solutions. We will discuss the motivation and use cases around a software layer to virtualize server-flash and make it compatible with clustered hypervisor features like VM mobility, high availability, distributed VM scheduling, data protection, and disaster recovery. Finally, we will present some empirical results from one such flash hypervisor (FVP) implemented at PernixData, and its potential long term impact on data center storage design.


Migrating to Cassandra in the Cloud, the Netflix Way

Jason Brown, Senior Software Engineer, Netflix

Abstract

Netflix grew up using the traditional enterprise model for scaling: a monolithic web application on top of a monolithic database in a single datacenter, buying bigger boxes, stuffing more user data into session memory. It all worked great when Netflix had less than 1 million customers (and was rapidly growing). Then one day that model failed us, miserably. The single-point-of-failure bug hit us hard, and we were hobbled for days. Since then, Netflix has reinvented its technology stack from top to bottom - abandoning the single, monolithic web application for tiered distributed services, as well as moving beyond our SPOF database to more resilient architectures.

In this talk I'll discuss my involvement with Cassandra at Netflix, first as a user of this new system, then as a developer of it. I'll discuss how we migrated from our traditional datacenter to the cloud, how we store and back up data, and the problems of rapidly scaling out a persistence layer under a burgeoning distributed architecture.


Platform as a Service and the Newton: One of These Things is Just Like the Other

Gerald Carter, Senior Consulting Engineer, EMC, Isilon Storage Division

Abstract

Platform as a Service (PaaS) offerings, like the Apple Newton, launched at a time when technology had not matured to the point necessary to cross the chasm to the early majority and into mass markets. Successes did exist, but were limited to specialized applications already targeted at a vendor’s existing platform. Infrastructure as a Service (IaaS), and its successor of software defined entities, are necessary intermediate steps towards decoupling application development from operational overhead. This talk will explore what the future will look like when developers can once again focus solely on applications and interfaces and turn a blind eye to operations.


Storage Infrastructure Performance Validation at Go Daddy – Best Practices from the World’s #1 Web Hosting Provider

Julia Palmer, Storage Protection and Data Manager, Go Daddy
Justin Richardson, Senior Storage Engineer, Go Daddy

Abstract

Infrastructure is evolving rapidly these days, especially for storage professionals. A flurry of new technologies such as SSDs and tiering promise faster, cheaper, and more cost-effective storage solutions. Storage-as-a-service offers a new blueprint for flexible, optimized storage operations. Go Daddy is taking full advantage of these opportunities with continual innovation.

Attend this presentation to hear how Go Daddy utilized an innovative new approach to storage infrastructure validation that enabled them to accelerate the adoption of new technologies and reduce costs by nearly 50% while maintaining 99.999% uptime for their 28 PB of data. The new process empowers Go Daddy with the insight they need to optimize both service delivery and vendor selection. Audience members will also learn how to evaluate storage workloads and identify potential performance and availability problems before they are experienced by end users.


Worlds Colliding: Why Big Data Changes How to Think about Enterprise Storage

Addison Snell, CEO, Intersect360 Research

Abstract

Addison Snell of Intersect360 Research will present an overview of how Big Data trends have changed some fundamental drivers in acquiring, architecting, and administering enterprise storage. With the majority of Big Data implementations coming from in-house development — Hadoop is just the tip of the iceberg — storage developers will find themselves taking on new roles that are defined by performance and scalability as much as reliability and uptime. Learn why high performance computing technologies like parallel file systems and InfiniBand could cross the Rubicon into enterprise, while an IT darling like Cloud might not play.


LONG TERM RETENTION

Combining SNIA Cloud, Tape and Container Format Technologies for the Long Term Retention of Big Data

Sam Fineberg, Distinguished Technologist, HP
Simona Rabinovici-Cohen, Research Staff Member, IBM

Abstract

Generating and collecting very large data sets is becoming a necessity in many domains that also need to keep that data for long periods. Examples include astronomy, atmospheric science, genomics, medical records, photographic archives, video archives, and large-scale e-commerce. While this presents significant opportunities, a key challenge is providing economically scalable storage systems to efficiently store and preserve the data, as well as to enable search, access, and analytics on that data in the far future. Both cloud and tape technologies are viable alternatives for storage of big data and SNIA supports their standardization. The SNIA Cloud Data Management Interface (CDMI) provides a standardized interface to create, retrieve, update, and delete objects in a cloud. The SNIA Linear Tape File System (LTFS) takes advantage of a new generation of tape hardware to provide efficient access to tape using standard, familiar system tools and interfaces. In addition, the SNIA Self-contained Information Retention Format (SIRF) defines a storage container for long term retention that will enable future applications to interpret stored data regardless of the application that originally produced it. This tutorial will present advantages and challenges in long term retention of big data, as well as initial work on how to combine SIRF with LTFS and SIRF with CDMI to address some of those challenges. SIRF for the cloud will also be examined in the European Union integrated research project ForgetIT – Concise Preservation by combining Managed Forgetting and Contextualized Remembering.

Learning Objectives

  • Importance of long term retention
  • Challenges in long term retention
  • Learn about SIRF
  • Learn how SIRF works with tape and in the cloud

Best Practices, Optimized Interfaces, and APIs Designed for Storing Massive Quantities of Long Term Retention Data

Stacy Schwarz-Gardner, Strategic Technical Architect, Spectra Logic

Abstract

The growth, access requirements, and retention needs for data in a mass storage infrastructure for HPC, life sciences, media and entertainment, higher education, and research are becoming unmanageable. Organizations continue to use legacy methodologies to manage today's Big Data growth, and it is not working. Traditional storage tiering and backups do not solve the problem and create additional cost and overhead. Redefining the term “Archive” as an online, accessible, affordable data management platform decoupled from infrastructure will be required to solve data growth and retention challenges going forward. Leveraging new optimized interfaces and APIs for disk, tape, and cloud will be required to fully enable the Active Archive experience.

Learning Objectives

  • Understand how active archive technologies work and how companies are using them to gain data assurance and cost effective scalability for archived data.
  • Learn the implications of data longevity and planning considerations for long-term retention and accessibility.
  • An overview of new, innovative interfaces and APIs designed to better optimize disk, tape, and cloud storage media for archive purposes.

NEW THINKING

Screaming Fast Galois Field Arithmetic Using Intel SIMD Instructions

Ethan Miller, Director of the NSF Industry/University Cooperative Research Center, Associate Director of the Storage Systems Research Center (SSRC), University of California

Abstract

Galois Field arithmetic forms the basis of Reed-Solomon and other erasure coding techniques to protect storage systems from failures. Most implementations of Galois Field arithmetic rely on multiplication tables or discrete logarithms to perform this operation. However, the advent of 128-bit instructions, such as Intel’s Streaming SIMD Extensions, allows us to perform Galois Field arithmetic much faster. This talk outlines how to leverage these instructions for various field sizes, and demonstrates the significant performance improvements on commodity microprocessors. The techniques that we describe are available as open source software.
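
To make the baseline concrete, here is a minimal sketch, assuming nothing from the talk itself, of the log/antilog table technique that the SIMD approach accelerates; SSSE3-style implementations replace these per-byte lookups with 16-entry nibble tables applied to 16 bytes at once via shuffle instructions.

    # Table-based GF(2^8) multiplication; 0x11D is a common field polynomial.
    def build_gf256_tables(poly=0x11D):
        exp = [0] * 512                  # doubled so gf_mul can skip a modulo
        log = [0] * 256
        x = 1
        for i in range(255):
            exp[i] = x
            log[x] = i
            x <<= 1                      # multiply by the generator (2)
            if x & 0x100:                # reduce by the field polynomial
                x ^= poly
        for i in range(255, 512):
            exp[i] = exp[i - 255]
        return exp, log

    def gf_mul(a, b, exp, log):
        if a == 0 or b == 0:
            return 0
        return exp[log[a] + log[b]]      # a*b = g^(log a + log b)

    exp, log = build_gf256_tables()
    assert gf_mul(3, 7, exp, log) == 9   # (x+1)(x^2+x+1) = x^3+1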


NV-Heaps: Making Persistent Objects Fast and Safe with Next-Generation Non-Volatile Memories

Joel Coburn, Software Engineer, Google/UCSD

Abstract

Persistent, user-defined objects present an attractive abstraction for working with non-volatile program state. However, the slow speed of persistent storage (i.e., disk) has limited their performance. Fast, byte-addressable, non-volatile technologies, such as phase change memory, will remove this constraint and allow programmers to build high-performance, persistent structures in non-volatile storage that is almost as fast as DRAM. However, existing persistent object systems are ill-suited to these memories because the assumption that storage is slow drives many aspects of their design. Creating structures that are flexible and robust in the face of application and system failure, while minimizing software overheads, is challenging. The system must be lightweight enough to expose the performance of the underlying memories, but it also must avoid familiar bugs such as dangling pointers, multiple free()s, and locking errors in addition to unique types of hard-to-find pointer safety bugs that only arise with persistent objects. These bugs are especially dangerous since any corruption they cause will be permanent.

We have implemented a lightweight, high-performance persistent object system called NV-heaps that prevents these errors and provides a model for persistence that is easy to use and reason about. We implement search trees, hash tables, sparse graphs, and arrays using NV-heaps, BerkeleyDB, and Stasis. Our results show that NV-heap performance scales with thread count and that data structures implemented using NV-heaps out-perform BerkeleyDB and Stasis implementations by 32x and 244x, respectively, when running on the same memory technology. We also quantify the cost of enforcing the safety guarantees that NV-heaps provides and measure the costs for NV-heap primitive operations.


LazyBase: Trading Freshness for Performance in a Scalable Database

Brad Morrey, Senior Research Scientist, HP Labs

Abstract

The LazyBase scalable database system is specialized for the growing class of data analysis applications that extract knowledge from large, rapidly changing data sets. It provides the scalability of popular NoSQL systems without the query-time complexity associated with their eventual consistency models, offering a clear consistency model and explicit per-query control over the trade-off between latency and result freshness. With an architecture designed around batching and pipelining of updates, LazyBase simultaneously ingests atomic batches of updates at a very high throughput and offers quick read queries to a stale-but-consistent version of the data. Although slightly stale results are sufficient for many analysis queries, fully up-to-date results can be obtained when necessary by also scanning updates still in the pipeline. Compared to the Cassandra NoSQL system, LazyBase provides 4X-5X faster update throughput and 4X faster read query throughput for range queries while remaining competitive for point queries. We demonstrate LazyBase's trade-off between query latency and result freshness as well as the benefits of its consistency model. We also demonstrate specific cases where Cassandra's consistency model is weaker than LazyBase's.
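
A sketch of the freshness trade-off described above, with entirely hypothetical names (LazyBase's real interface batches updates through a multi-stage pipeline): reads normally hit the last fully ingested snapshot, and only a caller with a tight freshness bound pays the cost of scanning updates still in the pipeline.

    import time

    class FreshnessStore:
        def __init__(self):
            self.snapshot = {}            # fully ingested, consistent view
            self.pipeline = []            # (timestamp, batch) not yet merged
            self.snapshot_time = time.time()

        def ingest(self, batch):          # an atomic batch enters the pipeline
            self.pipeline.append((time.time(), dict(batch)))

        def merge(self):                  # background step: fold batches in
            for ts, batch in self.pipeline:
                self.snapshot.update(batch)
                self.snapshot_time = ts
            self.pipeline.clear()

        def read(self, key, max_staleness=float("inf")):
            value = self.snapshot.get(key)
            # Scan the pipeline only when the caller's freshness bound is
            # tighter than the age of the snapshot.
            if time.time() - self.snapshot_time > max_staleness:
                for _, batch in self.pipeline:
                    if key in batch:
                        value = batch[key]
            return value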


GraphChi: Large-Scale Graph Computation on Just a PC

Aapo Kyrola, Ph.D. student, CMU Computer Science Department

Abstract

In "GraphChi: Large-Scale Graph Computation on Just a PC" at OSDI '12, we proposed Parallel Sliding Windows (PSW), a novel method for efficiently processing large graphs from external memory (disk). Based on PSW, we designed and implemented a complete system, GraphChi, for vertex-centric graph computation. We demonstrated that GraphChi is capable of solving even the biggest graph computation problems on just a single PC, with performance often matching distributed computation frameworks.

In this talk I discuss the motivations for single-computer computation, present the GraphChi system and its design, and describe some recent work on extending and improving GraphChi, including DrunkardMob, a novel random walk engine (to be presented at ACM RecSys '13). I will also talk about the challenges of graph computation at a general level and discuss future directions of my research.
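
As background, the following toy sketch shows the vertex-centric programming model that GraphChi executes; the PSW shard and sliding-window I/O is elided, everything here lives in memory, and the names are illustrative rather than GraphChi's actual API.

    # Vertex-centric PageRank: each vertex recomputes its value from the
    # contributions its in-neighbors broadcast, one superstep at a time.
    def run_pagerank(in_edges, out_degree, iterations=10, d=0.85):
        rank = {v: 1.0 for v in out_degree}
        for _ in range(iterations):
            # GraphChi would visit vertices one interval at a time, loading
            # each interval's edges sequentially with Parallel Sliding
            # Windows rather than keeping the whole graph in RAM.
            contrib = {v: rank[v] / max(out_degree[v], 1) for v in out_degree}
            rank = {v: (1 - d) + d * sum(contrib.get(s, 0.0)
                                         for s in in_edges.get(v, []))
                    for v in out_degree}
        return rank

    in_edges = {"a": ["b", "c"], "b": ["c"]}   # hypothetical 3-vertex graph
    out_degree = {"a": 0, "b": 1, "c": 2}      # edges: b->a, c->a, c->b
    print(run_pagerank(in_edges, out_degree))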


NFS

NFS on Steroids: Building Worldwide Distributed File System

Gregory Touretsky, Solutions Architect, Intel

Abstract

Intel's R&D environment spans dozens of locations across the globe. It includes over 50,000 compute servers running Linux and tens of PBs of centralized NFS storage. NFS provides a good solution for data sharing within a data center; however, it doesn't necessarily give an answer for cross-site access over high-latency links. The presentation will share Intel IT's experience with the development, implementation, and adoption of an NFS-based federated, distributed, secure storage infrastructure in which every file is accessible from any client worldwide. It will describe the multiple steps required to enable a global, multi-level, on-demand caching infrastructure, including new caching and manageability solutions, environment standardization, and more. Improvements made to data transfer and storage synchronization protocols will also be covered.

Learning Objectives

  • Global data sharing challenges in the large Enterprise
  • Implications of RPCSEC-GSS implementation in the large enterprise
  • Self-service manageability solutions required for end users - internal development and call for action

Implementing NFSv3 in Userspace: Design and Challenges

Tai Horgan, Software Engineer, EMC Isilon Storage Division

Abstract

NFS in usermode is an uncommon challenge, one which necessitates unique design features not present in most UNIX implementations. EMC Isilon’s NFS team has been tasked with replacing the kernel-based NFS server previously implemented in OneFS with one in userspace, in order to take advantage of a new protocol-agnostic access auditing framework. This talk will serve as a postmortem discussion and case study of EMC Isilon’s new usermode NFSv3 server. We will discuss some of the challenges we encountered while redesigning the server for its new environment, future-proofing our design, and maintaining common code with SMB, NFSv4, and other protocols.

Learning Objectives

  • Advantages of user mode over kernel mode for Isilon in particular, and clustered storage in general
  • Userland challenges, including operation transactions, filehandle translation, and the RPC model
  • Design considerations inherent in sharing code with SMB and NFSv4

pNFS Directions

Matt Benjamin, Founder, Cohort LLC
Adam Emerson, Developer, CohortFS, LLC

Abstract

This session features leading pNFS developers describing ways in which pNFS is adapting to meet new challenges in distributed storage. New features are under discussion by the IETF NFS Working Group, and new implementation platforms are emerging as pNFS adapts to market adoption. Some of these include:

  • Metadata scaling proposals
  • New front- and back-ends, including Ceph and Ganesha
  • Defining a software-defined storage layer
  • Integration with cloud storage APIs
  • Defining workloads for pNFS performance and stability measurement

Learning Objectives

  • Future requirements for pNFS and how storage architects and developers intend to meet them
  • How pNFS (and NFSv4) are responding to cloud storage opportunities
  • Consensus views on the challenges for pNFS in the near-, intermediate-, and long-term

pNFS, NFSv4.1, FedFS and Future NFS Developments

Tom Haynes, Ph.D., Sr. Engineer, NetApp

Abstract

The NFSv4 protocol undergoes a repeated life cycle of definition and implementation. The presentation will be based on years of experience implementing server-side NFS solutions up to NFSv4.1, with specific examples from NetApp and others. We'll examine the life cycle from a commercial implementation perspective: what goes into the selection of new features (including FedFS, NFSv4.2, and NFSv4.3), the development process and how these features are delivered, and the impact these features have on end users. We'll also cover the work of Linux NFS developers and provide suggestions for file system developers based on these and vendor experiences. Finally, we'll discuss how implementation and end-user experience feed back into the protocol definition, along with an overview of expected NFSv4.2 features.

Learning Objectives

  • Understand the NFS protocol & its application to modern workloads
  • How NFSv4.1 is being implemented by vendors and end users
  • The differences between NFSv3 and NFSv4.1, pNFS, FedFS
  • An overview of proposed features in NFSv4.2 and NFSv4.3

OBJECT STORAGE

Architecting Block and Object Geo-replication Solutions with Ceph

Sage Weil, Founder & CTO, Inktank

Abstract

As the size and performance requirements of storage systems have increased, file system designers have looked to new architectures to facilitate system scalability. Ceph is a fully open source distributed object store, network block device, and file system designed for reliability, performance, and scalability from terabytes to exabytes. The Ceph architecture was initially designed to accommodate single data center deployments, where low-latency links and synchronous replication were an easy fit for a strongly consistent data store. For many organizations, however, storage systems that span multiple data centers and geographies for disaster recovery or follow-the-sun purposes are an important requirement. This talk will give a brief overview of the Ceph architecture, and then focus on the design and implementation of asynchronous geo-replication and disaster recovery features for the RESTful object storage layer, the RBD block service, and Ceph's underlying distributed object store, RADOS. We will discuss the fundamental requirements for a robust geo-replication solution (such as point-in-time consistency), the differing requirements of each storage use case and API, and their implications for the asynchronous replication strategy.
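
As an illustration of the point-in-time consistency requirement, here is a generic sketch, not Ceph code, of the journal-shipping pattern that asynchronous geo-replication designs commonly build on: the primary logs every mutation in order, and the remote site applies the log strictly in sequence, so it is always a consistent, if slightly stale, view of the primary.

    class Primary:
        def __init__(self):
            self.objects, self.journal, self.seq = {}, [], 0

        def put(self, name, data):       # every mutation is journaled in order
            self.seq += 1
            self.objects[name] = data
            self.journal.append((self.seq, "put", name, data))

    class Replica:
        def __init__(self):
            self.objects, self.applied = {}, 0

        def apply(self, journal):
            # Applying strictly in sequence means stopping at any point
            # leaves a point-in-time-consistent (if stale) copy.
            for seq, op, name, data in journal:
                if seq <= self.applied:
                    continue
                if op == "put":
                    self.objects[name] = data
                self.applied = seq

    primary, replica = Primary(), Replica()
    primary.put("obj1", b"v1"); primary.put("obj2", b"v2")
    replica.apply(primary.journal)       # replica consistent through seq 2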

Learning Objectives

  • An overview of the Ceph architecture
  • Info on the design and implementation of asynchronous geo-replication and disaster recovery features for the RESTful object storage layer, the RBD block service, and Ceph's underlying distributed object store, RADOS.
  • The fundamental requirements for a robust geo-replication solution (such as point-in-time consistency), the differing requirements of each storage use case and API, and their implications for the asynchronous replication strategy

Transforming PCIe-SSDs and HDDs with Infiniband into Scalable Enterprise Storage

Dieter Kasper, Principal Architect, Fujitsu

Abstract

Developers and technologists are fascinated by the low latency and high IOPS of PCIe-SSDs. But customers expect a healthy balance between performance and enterprise features such as high availability, scalability, elasticity, and data management. The open source distributed storage solution Ceph, designed for reliability, performance, and scalability, combined with commodity hardware such as InfiniBand, PCIe-SSDs, and HDDs, will merge into a perfect team if the best I/O parameters and the right interconnect protocols are used and tuned.

Learning Objectives

  • Get an overview on scale-out flash storage
  • Learn how to transform Open Source Software and commodity hardware into Scalable Enterprise Storage
  • Learn about the right I/O subsystem parameters to access SSDs
  • Lessons learned choosing the right interconnect protocol for low latency and high bandwidth

Huawei SmartDisk Based Object Storage UDS

Qingchao Luo, Cloud Storage Architect, Huawei

Abstract

Huawei UDS (Universal Distributed Storage) is a massive object-storage system that offers enterprises and service providers a comprehensive solution to data explosion difficulties. Through its proprietary technology, a Huawei UDS system in one data center is able to scale out to 25,000 SmartDisks, each composed of one ARM chip and one hard drive. As a result, it provides competitive scalability, reliability, and cost.


PERFORMANCE/WORKLOAD

Improvements in Storage Energy Efficiency via Storage Subsystem Cache and Tiering

Chuck Paridon, Storage Performance Architect, HP
Herb Tanzer, Storage Hardware Architect, HP

Abstract

The energy efficiency of storage subsystems, in terms of Idle Capacity/Watt, IOPS/Watt, and MB/s/Watt, can be significantly improved through the deployment of Capacity Optimization Methods (COMs). These features affect the apparent capacity, IO rate, and throughput (MB/s), and therefore also the target “green” metrics cited above. This paper describes a case study of the compound effect of two features, storage subsystem cache and tiered storage, on the primary metrics of the SNIA Emerald Power Efficiency Specification, using both the former random workloads and the recently adopted “Hot Band” workload as the comparative test stimuli. Also described is the potential energy efficiency benefit of several additional COM types.
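
For readers new to these metrics, the arithmetic is simply measured work divided by measured power; the numbers below are purely illustrative, not results from the case study.

    raw_capacity_gb = 500_000          # hypothetical subsystem capacity (GB)
    idle_watts      = 1_200            # average power while idle
    hot_band_iops   = 150_000          # IOPS under the Hot Band workload
    active_watts    = 1_500            # average power under load
    throughput_mbps = 6_000            # MB/s under the sequential tests

    print(raw_capacity_gb / idle_watts)    # Idle Capacity/Watt ~ 416.7 GB/W
    print(hot_band_iops / active_watts)    # IOPS/Watt = 100.0
    print(throughput_mbps / active_watts)  # MB/s/Watt = 4.0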

Learning Objectives

  • Compare the characteristics of the Hot Band, cache friendly workload with that of a completely random stimulus
  • Quantify the performance benefits of adequate storage subsystem cache when exposed to a cache friendly workload
  • Describe the effect of cache assistance on the workload as seen by the “back end” or traditional spinning media portion of the storage
  • Quantify the subsequent deployment of storage tiering on the overall performance measurements and associated Emerald Power Efficiency Metrics

SPEC SFS Benchmark - The Next Generation

Spencer Shepler, Architect, Microsoft

Abstract

The SPEC SFS Benchmark has long been the standard in the file sharing industry for characterizing the performance capabilities of file servers: first with the NFSv2 protocol, then NFSv3, and eventually NFSv3 and CIFS. Even with its success, the benchmark has long needed an update to address new protocols such as SMB3 and NFSv4, as well as the need to include the client in the measurement of the system's overall capabilities. The SPEC SFS committee has been working on an update to address these and other areas of needed improvement. The basic structure and capability of the in-development benchmark will be presented, along with a description of the workloads as of fall 2013. The latest status of the benchmark development will be provided, along with some potential uses outside of the traditional vendor usage for performance measurement.

Learning Objectives

  • Performance measurement with SPEC SFS
  • Capacity planning with SPEC SFS

Lessons Learned Tuning HP's SMB Server with the FSCT Benchmark

Bret McKee, Distinguished Technologist, Hewlett Packard
Vinod Eswaraprasad, Lead Architect, Wipro Technologies

Abstract

As part of an ongoing effort to increase performance of our NAS system, we have been tuning the system’s performance on the FSCT benchmark. As part of this effort, we have learned a number of things about the benchmark which might be of general interest. Areas which will be discussed include an overview of the benchmark, setup and configuration steps required to run the benchmark and insight into understanding the results and errors generated by running FSCT. The intention is to discuss “what we learned about the benchmark while using it to make our system faster”, including what SMB protocol elements it uses, what impact newer SMB features like leases have, etc.

Learning Objectives

  • Understand what the FSCT benchmark is, and what it measures
  • Understand the roles of each system involved in the FSCT benchmark
  • Be aware of the various FSCT scenario types and the requests FSCT makes
  • Be able to interpret the data output by the FSCT benchmark

Forget IOPS: A Proper Way to Characterize & Test Storage Performance

Peter Murry, Senior Product Specialist, SwiftTest

Abstract

Storage workloads are changing. Applications stress storage infrastructure in different ways – like workloads generated by virtualized applications or ones containing high amounts of metadata. Such dynamics make it difficult to confidently predict how storage systems will behave in the real world, for both end users and vendors. How can storage performance be better understood and defined?

Learning Objectives

  • Why IOPs testing alone is an inadequate method for characterizing storage performance, and how the inclusion of metadata traffic is a critical component of characterizing storage networking performance
  • How to use real-world, end user data to characterize a variety of workloads that stress storage systems in very specific ways. This includes using appropriate levels of Write/Read operations in combination with metadata as well as how to characterize application storage performance in virtualized environments
  • A method to test storage systems against such workloads

High-Throughput Cloud Storage Over Faulty Networks

Yogesh Vedpathak, Software Developer, Cleversafe, Inc.

Abstract

Storage systems increasingly rely on the Internet as the medium of data transfer. The Internet, as a high-bandwidth, high-latency, high-packet-loss connection, is very different from the clean networks of typical SANs. Under such conditions, TCP’s capabilities are often stretched to their breaking point. In this presentation we detail the methods we used to overcome random network slowdowns, packet drops, congestion control, and other challenges. Our result: we achieved storage throughputs on the Internet that were 80% of those achieved by the same test on a low-latency, zero-packet-loss LAN.

Learning Objectives

  • Why the network layer is increasingly important in today's storage systems, as cloud storage takes off, and the NIC and Internet speeds begin to eclipse the speed of a local hard drive.
  • Features and limitations of TCP, including congestion control, window scaling, error handling, order preservation and how they pertain to modern cloud based storage systems.
  • How to achieve reliable low-latency delivery of messages over networks with unpredictable reliability and performance, in ways that are easy to implement, manageable and widely supported.

SECURITY

Multi-vendor Key Management – Does It Actually Work?

Tim Hudson, Technical Director, Cryptsoft Pty Ltd

Abstract

A standard for interoperable key management exists, but what actually happens when you try to use products and key management solutions from multiple vendors? Does it work? Are any benefits gained? Practical experience from implementing the OASIS Key Management Interoperability Protocol (KMIP) and from deploying and interoperability-testing multiple vendor implementations of KMIP forms the bulk of the material covered. Guidance will be provided on the key issues to require your vendors to address, and on how to distinguish between simple vendor tick-box approaches to standards conformance and actual interoperable solutions.

Learning Objectives

  • In-depth knowledge of the core of the OASIS KMIP
  • Awareness of requirements for practical interoperability
  • Guidance on the importance of conformance testing

Matching Security to Data Threats – More is Not Better, but Less Can be Bad

Chris Winter, Director Product Management, SafeNet

Abstract

A chain is only as strong as its weakest link and adding more links doesn’t make it any stronger. The same is true for securing critical data with encryption – just adding more encryption doesn’t necessarily make critical data more secure. The challenges facing most organizations are twofold: 1) understanding which threats and vulnerabilities apply to them and their data, and 2) knowing when they have sufficient data encryption to protect them from the threats, but not so much that their costs and management resources are strained. It is additionally important to understand that not all threats can be addressed by data encryption and that some threats may have to be rationalized by an organization in terms of the cost of the remedial work.

Learning Objectives

  • As a result of participating in this session, attendees will be able to understand which threats can and should be addressed by data encryption and which threats need other solutions
  • As a result of participating in this session, attendees will be able to understand where and when multiple data encryption technologies are complementary, and where they are redundant
  • As a result of participating in this session, attendees will be able to understand how to justify the deployment of data encryption based on cost of deployment, management and the cost of failure to do so

Trusted Computing Technologies for Storage

Dr. Michael Willett, Storage Security Strategist, Samsung

Abstract

The Trusted Computing Group (TCG) has created specifications for trusted computing with a focus on ease of use, transparency, robust security functions in hardware, integration into the computing infrastructure, and low cost; these include Self-Encrypting Drives (SEDs). TCG technologies will be described, including their application to the design of trusted storage.

Learning Objectives

  • Overview of the security challenges facing storage
  • Business and compliance motivations for applying trusted computing to storage, including breach notification legislation
  • Introduction to the technical specifications of the Trusted Computing Group, especially as they apply to storage
  • Technical details of Self-Encrypting Drives (SED)

SMB

SMB3 Meets Linux: The Linux Kernel Client

Steven French, Senior Engineer - SMB3 Protocol Architect, IBM

Abstract

SMB3 support was merged into the Linux kernel in version 3.8. What have we learned? And why use SMB3? Learn what SMB3 features are available to Linux users, when you should consider using SMB3 instead of other protocols when running Linux, and how to configure optional features and improve performance. In addition, there will be a demonstration of some of the newer features and a description of what improvements are expected in the coming months.

Learning Objectives

  • Why use SMB3 on Linux
  • Which features of SMB3 are enabled on Linux
  • How to better configure the Linux CIFS/SMB2/SMB3 client
  • Learn how to use SMB3 on Linux to access resources

Pike - Making SMB Testing Less Torturous

Brian Koropoff, Consulting Software Engineer, EMC Isilon

Abstract

Pike is a new Python protocol testing library for SMB2 and SMB3. It will be made publicly available along with a collection of tests under an open-source license. Pike has a simple and extensible architecture. It aims to make common case scenarios concise while still allowing deep control over message construction and dispatch when necessary. This talk will explore the core architecture of Pike and how to extend it with new features and tests. Along the way, we'll see how the dynamism and expressiveness of Python make it a great environment for protocol testing.

Learning Objectives

  • How Pike is designed
  • How to extend Pike to support new protocol messages (or new protocols)
  • How to write tests with Pike

Mapping SMB onto Distributed Storage

Christopher R. Hertel, Senior Principal Software Engineer, Red Hat
José Rivera, Software Engineer, Red Hat

Abstract

The SMB protocol is, and always has been, an extension of the Operating Systems and File Systems on which it was designed to run. Yes, that means DOS/FAT, OS2/HPFS, and Windows/NTFS. Building an SMB server on any other platform requires a lot of special handling, sort of like the special pounding of a square peg into a round hole with a finely tuned sledgehammer. The design of many newer file systems, particularly object, cluster, and distributed storage systems, complicates matters even more by "relaxing" adherence to standard semantics. This presentation will highlight a number of ways in which these "relaxed fit" file systems clash with SMB expectations, and will provide some examples of ways to bridge the gap.

Learning Objectives

  • Problems to look for when integrating SMB with a Distributed File System.
  • SMB integration problem solving approaches.
  • The importance of itemizing SMB integration problems.

Samba Scalability and Performance Tuning

Volker Lendecke, Co-Founder of SerNet, Samba Team/SerNet

Abstract

During the last few months, a lot of work has been spent on making Samba scale well for large numbers of clients. In particular, together with ctdb, a few bottlenecks were discovered that surprised the developers. Luckily, most of these bottlenecks could be fixed. This talk will present the details of our improved scalability in Samba.

Learning Objectives

  • Performance testing and tuning of network servers

1 S(a) 2 M 3 B(a) 4

Michael Adam, Senior Software Engineer, SerNet GmbH

Abstract

Samba 4.0 was released in December 2012. It is the first release of Samba to feature the Active Directory compatible Domain Controller. But it is also a very important file server release: Samba 4.0 ships with SMB 3 enabled by default. In my SDC 2012 talk, I described the tasks required for Samba to implement many of the features of SMB 3. In this talk, I will describe the subset of SMB 3 that Samba 4.0 already offers, and report on the work in progress implementing more SMB 3 features, giving an outlook on what to expect in upcoming Samba 4.1.

Learning Objectives

  • Learn about Samba 4.0 in general
  • Learn about SMB 3 support in Samba 4.0
  • Learn about current status of SMB 3 development for Samba 4.1 and later releases

Implementation of SMB3.0 in Scale-Out NAS

Kalyan Das, Chief Architect - Storage Protocols, Huawei Technologies
Jun Liu, Software Architect - Storage and NAS, Huawei Technologies

Abstract

SMB 3.0 features several improvements over the CIFS protocol on which it is based. Continuous Availability achieves high storage availability through transparent failover. Copy Offload and RDMA boost performance dramatically in Windows Server 2012. These features, along with Multi-channel, are quite attractive to customers. Taken together, however, they pose implementation challenges in a scale-out NAS context. We discuss our experience implementing SMB3 on a clustered system without compromising functionality or performance.

Learning Objectives

  • Huawei’s implementation of Offload Data Transfer (ODX) through SMB Copy Offload
  • Huawei’s use of cluster-wide locking through its proprietary Outer Lock (OL) service to achieve SMB3.0 transparent failover
  • Test results on multi-channel, ODX, [and maybe SMB over RDMA]

SMB Direct Update

Greg Kramer, Sr. Software Engineer, Microsoft

Abstract

This talk will explore upcoming changes to the SMB 3 protocol that increase SMB Direct performance for high IOP workloads. The protocol changes will be motivated by performance analyses, including updated SMB Direct performance results for a variety of IO workloads.


SMB3 Update

David Kruse, Development Lead, Microsoft

Abstract

The past year has seen multiple companies and teams release SMB3 solutions, and many customers deploy them into production. This talk will look at some upcoming minor adjustments to SMB3 based on lessons learned, and look ahead to what might come next.

Learning Objectives

  • Learn more about SMB3
  • Discuss some potentially new and interesting applications for SMB3

Scaled RDMA Performance & Storage Design with Windows Server SMB 3.0

Dan Lovinger, Principal Software Design Engineer, Microsoft
Spencer Shepler, Principal Program Manager, Microsoft

Abstract

This session will present a summary of the performance of the Windows Server 2012 File Server’s Remote Direct Memory Access capabilities over SMB 3.0. Systems presented will range from production “Windows Server Cluster in a Box” from EchoStreams to rack-scaled storage systems, across multiple RDMA solutions. Design considerations for highly scaled systems and their tradeoffs will be discussed. With RDMA the processor cost of bulk data access on remote file systems has the potential to approach the range of local storage. This provides a novel build option for deploying high speed and highly efficient consolidated storage solutions.


Exploiting the High Availability Features in SMB 3.0 to Support Speed and Scale

James Cain, Principal Software Architect, Quantel Ltd

Abstract

Microsoft have made massive changes in version 3.0 of the SMB protocol, many of which contribute towards SMB 3.0 offering high availability (HA) for use in the data centre. This talk will present the results of investigations into how these innovations can be exploited for improved I/O speed and architectural scale. The presentation will first look at the needs of HA in a NAS protocol. It will then offer an insight into why features were added to SMB 3.0, providing technical analysis of the protocol itself and live demonstrations of noted features using the author's own implementation of an SMB server.

Learning Objectives

  • Understanding SMB 3.0 at an architectural level
  • Practical insight into network topologies for NAS
  • Exploring requirements for High Availability and design choices made in SMB 3.0
  • Finding the minimal shared state between SMB servers in a cluster

Samba 4.0 released: What now for the Open Source AD Domain Controller?

Andrew Bartlett, Samba Developer, Samba Team

Abstract

Coming Soon


A Status Report on SMB Direct (RDMA) for Samba

Richard Sharpe, Samba Team Member, Panzura

Abstract

Since Microsoft announced SMB Direct there has been interest in providing support for SMB Direct under Samba. This presentation will describe the current state of the project to provide that support. It will discuss the process that we have undertaken, the players, and what we have working to date.

Learning Objectives

  • Obtain more information about the state of this project
  • Understand the alternatives we have looked at
  • Understand the technical difficulties involved

SOFTWARE DEFINED X

Hosting Performance-Sensitive Applications in the Cloud with Software-Defined Storage

Felix Xavier, Co-Founder and CTO, CloudByte

Abstract

Hosting performance-sensitive enterprise applications requires the delivery of guaranteed quality of service (QoS), which has been the Achilles’ heel of large cloud service providers. So, what stops legacy solutions from delivering guaranteed QoS? Noisy neighbors! Within a shared storage platform, legacy solutions cannot isolate and dedicate a specific set of resources to any application. As a result, applications are in a constant struggle for the shared storage resources. This session will look at different storage options with a focus on software-defined storage solutions that help solve the noisy neighbor problem and guarantee QoS to every application in a shared storage environment.

Learning Objectives

  • Truly understand the software-defined storage approach and if it is a fit for your environment with a focus on cloud infrastructures
  • On-demand performance: learn how to use software-defined storage solutions to automate resources in a shared environment based on QoS demands
  • Look at ways to automate and get QoS analytics with unprecedented granularity, right down to the storage volume layer

Software-Defined Network Technology and the Future of Storage

Stuart Berman, Chief Executive Officer, Jeda Networks

Abstract

Data is increasing exponentially and storage budgets are not keeping pace. IT staffs are increasingly under pressure to squeeze efficiencies at every turn while aligning with strategic business goals. Fortunately, new technologies by way of Software-Defined Networks (SDN) are having a liberating effect on network architectures. SDN abstracts control out of the switch into the software, accelerating server and desktop virtualization. A subset of SDN called Software Defined Storage Networking (SDSN) brings virtualization to the storage networking architecture. New, standardized network technologies have, for the first time, allowed us to virtualize the SAN using storage-focused SDN. We will examine the basics of SDSN and how it virtualizes the way servers communicate with storage to deliver new levels of agility and scalability to the modern enterprise.


Tunneling SCSI over SMB: Shared VHDX files for Guest Clustering in Windows Server 2012 R2

Jose Barreto, Principal Program Manager, Microsoft

Abstract

Windows Server Failover Clustering is a well-established technology for increasing application availability in the Microsoft platform. For Hyper-V virtualized workloads, you can also create a Failover Cluster comprised of a set of virtual machines and some shared storage in the form of iSCSI LUNs, virtual Fibre Channel LUNs or SMB file shares. In this session, we’ll start by describing the overall Guest Clustering scenario and then dive into the new Windows Server 2012 R2 option to use Shared VHDX files. We’ll then introduce the “Remote Shared Virtual Hard Disk Protocol” (the new protocol behind the Shared VHDX feature) and explain how this protocol allows SCSI commands to be tunneled over the SMB protocol. We’ll also cover how non-Windows SMB implementations can offer the Shared VHDX capability via this new protocol.

Learning Objectives

  • Review the scenarios and requirements for guest high availability using Windows Server Failover Clustering inside a virtual machine
  • Describe the new Shared VHDX feature in Windows Server 2012 R2 for Guest Clustering
  • Explain how the Remote Shared Virtual Hard Disk Protocol allows SCSI commands to be tunneled over the SMB protocol

Defining Software Defined Storage

Lazarus Vekiarides, Entrepreneur and Technology Executive

Abstract

The notion of data services being comprised of software has natural appeal, but what exactly does it mean? Given a huge portfolio of software and hardware that is available for a datacenter today, it is difficult to make sense of what "software defined storage" truly is and what benefits it could provide. While there is some truth to the idea that it is about reducing reliance on costly hardware, many see it as a way to bring new flexibility to datacenter operations. In this discussion, we will propose a set of requirements and benefits, while walking through some examples of various software technologies with the goal of producing a crisp definition.

Learning Objectives

  • Learn to better define software defined storage and separate the reality from the marketing buzz
  • Learn about the benefits of software defined storage, and how it can bring new flexibility to your datacenter
  • Take a look at the future of software defined storage and how this movement is changing the game

SOLID STATE STORAGE

Delivering Nanosecond-Class Persistent Storage

Steffen Hellmold, Vice President of Marketing, Everspin Technologies

Abstract

NAND flash solves many problems in storage with its non-volatility and high IOPS performance. Designers can always deliver more IOPS with NAND assuming unlimited power and space. However, designers can’t deliver nanosecond-class response times with NAND because the medium isn’t fast enough. Spin Torque MRAM complements NAND flash and forms a persistent replacement for battery or capacitor-backed DRAM, delivering higher IOPS/$ and IOPS/W than NAND flash with nanosecond-class response times. In this presentation, the speaker will discuss how ST-MRAM is enabling a latency revolution in storage, just as NAND flash delivered an IOPS revolution.

Learning Objectives

  • ST-MRAM, introducing high performance persistent storage
  • How ST-MRAM complements NAND flash and replaces BB-DRAM
  • Applications for ST-MRAM, exploring the possibilities with prototypes

NVMe Based PCIe SSD Validation – Challenges and Solutions

Apurva Vaidya, Technical Architect, iGATE

Abstract

PCI Express (PCIe) solid state drives (SSDs) provide significant performance benefits in enterprise applications as compared to traditional HDDs and SSDs with legacy interfaces. The existing protocols (SAS, SATA) pose architectural limitations that prevent them from delivering the throughput desired for SSDs. The ideal solution to this problem is to move these devices closer to PCIe space, which provides optimum speed without adding the overheads posed by protocols like SAS and SATA. The emergence of Non-Volatile Memory Express (NVMe), a scalable host controller interface specifically developed for PCIe SSDs, and a supporting ecosystem will allow SSD suppliers to transition to NVMe-based PCIe SSD products. It is essential to understand the product validation challenges to reduce time to market for PCIe-SSD vendors. This paper highlights the challenges in validating queue configuration, handling of outstanding IOs, queue arbitration, interrupt coalescing, etc., and provides solutions to address these challenges.

Learning Objectives

  • NVMe based PCIe SSD subsystem validation setup and procedure
  • Challenges faced during validation
  • Solutions to address those challenges

TBF: A Memory-Efficient Replacement Policy for Flash-based Caches

Biplob Debnath, Research Staff Member, NEC Laboratories America

Abstract

TBF is a RAM-frugal cache replacement policy that approximates the least-recently-used (LRU) policy. It uses two in-RAM Bloom filters to maintain recency information and leverages an on-flash key-value store to cache objects. TBF can easily be integrated with any key-value store to provide caching functionality. It requires only one additional byte of RAM per cached object while providing performance similar to LRU and its variants, which makes it suitable for implementing a very large flash-based cache. Full paper: http://www.nec-labs.com/~biplob/Papers/TBF.pdf
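
A minimal sketch of the two-Bloom-filter recency idea, illustrative only (see the linked paper for the actual algorithm): membership in either filter marks an object as recently used, and the older filter is periodically discarded and replaced, aging out stale entries without per-object LRU list nodes.

    class BloomFilter:
        def __init__(self, bits=1 << 16, hashes=4):
            self.bits, self.hashes = bits, hashes
            self.array = bytearray(bits)     # one byte per bit, for simplicity

        def _positions(self, key):
            for i in range(self.hashes):
                yield hash((i, key)) % self.bits

        def add(self, key):
            for p in self._positions(key):
                self.array[p] = 1

        def __contains__(self, key):
            return all(self.array[p] for p in self._positions(key))

    class TwoBloomRecency:
        """~LRU with O(1) bytes/object: recency lives in filters, not lists."""
        def __init__(self, swap_every=10_000):
            self.current, self.previous = BloomFilter(), BloomFilter()
            self.ops, self.swap_every = 0, swap_every

        def touch(self, key):                # record an access (hit or insert)
            self.current.add(key)
            self.ops += 1
            if self.ops % self.swap_every == 0:
                # Age out: old "current" becomes "previous"; start fresh.
                self.previous, self.current = self.current, BloomFilter()

        def recently_used(self, key):        # eviction check: keep if recent
            return key in self.current or key in self.previous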

Learning Objectives

  • Understanding in-RAM metadata bottleneck for a huge flash-based second-level cache
  • Understanding the memory overhead of an LRU-like algorithm
  • Understanding how Bloom filter works
  • Understanding how to integrate in-RAM Bloom filter with an on-flash key-value store to reduce RAM consumption

SNIA NVM Programming Model

Paul von Behren, Software Architect, Intel Corporation

Abstract

Upcoming advances in Non-Volatile Memory (NVM) technologies will blur the line between storage and memory, creating a disruptive change to the way software is written. The new SNIA NVM Programming Model describes behavior provided by operating systems that enables applications, file systems, and other software to take advantage of new NVM capabilities. This tutorial describes four programming modes. Two modes address NVM extensions for NVM emulating hard disks: block mode (as used by file systems) and file mode (as used by most applications). There are also two modes for Persistent Memory (PM): kernel extensions (as used by PM-aware file systems) and PM file mode (as used by PM-aware applications). The tutorial also addresses some broader NVM software issues, such as strategies for storing pointers in persistent memory.
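
The PM file mode idea can be approximated today with an ordinary memory-mapped file; the sketch below (file name and sizes are illustrative) shows the load/store-plus-explicit-sync pattern the model formalizes, with mmap standing in for persistent memory.

    import mmap, os

    path, size = "/tmp/pm_region", 4096   # stand-in for a PM-aware file system
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, size)
    pm = mmap.mmap(fd, size)              # direct load/store access window

    pm[0:5] = b"hello"                    # store directly into the mapping,
                                          # no read()/write() block I/O
    pm.flush()                            # msync: the explicit sync point at
                                          # which the data is durable
    pm.close()
    os.close(fd)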

Learning Objectives

  • Awareness of the new SNIA NVM Programming Model
  • Advantages to software utilizing NVM features
  • Motivations for NVM device vendors to support the model

STORAGE MANAGEMENT

Demand for Storage Systems from a Customer Viewpoint in Japan

Satoshi Uda, Assistant Professor, Japan Advanced Institute of Science and Technology (JAIST)

Abstract

We have been providing storage services for more than 20 years with large-scale NAS systems at JAIST, managing data centrally for our whole institute. Recently, the construction of a private cloud environment based on virtualization technology has complicated the dependency structure of our systems and made our storage systems increasingly difficult to operate. In this presentation, we discuss a case study and customer demands for storage systems, based on our knowledge from deploying and operating them. Furthermore, non-technical matters are also important in operating storage systems: for example, getting good support for troubleshooting. We will discuss this point with stories from Japan. Note: This presentation will be jointly conducted with SNIA-J (SNIA Japan Branch).

Learning Objectives

  • A case study of deploying and operating large-scale storage at JAIST
  • The customer's voice: demands on storage systems and vendors

Prequel: Distributed Caching and WAN Acceleration

Jose Rivera, Software Engineer, Red Hat
Christopher R. Hertel, Senior Principal Software Engineer, Red Hat

Abstract

Prequel is an open-source implementation of PeerDist, a protocol for wide-area distributed caching developed by Microsoft. PeerDist is more commonly known as Microsoft's BranchCache feature. Prequel seeks to bring PeerDist into the open source world and integrate its capabilities with data access projects like Samba. This talk will cover the basics of the PeerDist protocol, currently deployed scenarios, and illustrate scenarios which Prequel can serve. We'll demonstrate a working Prequel installation on Linux serving Windows peers and walk through the protocol step by step. Questions and heckling will be taken throughout the presentation.


A Method to Back up and Restore Configuration Settings for each and every Component in the SAN Environment using SMI-S and also Replicate the Configuration on Clean Setups with the Same or Similar Components

Dhishankar Sengupta, Test Architect, NetApp

Abstract

All disaster recovery solutions shipping today communicate with APIs from different vendors' devices to retrieve information and perform management-oriented operations. To perform these operations, it is very important that the solution be interoperable with the APIs provided by the vendors. The method proposed in this paper overcomes the issues arising from dependency on vendor APIs. The solution is a software stack that backs up the configuration of devices participating in a SAN; upon failure of a site, or of a device in a site, the solution can replicate the configuration that existed on the previous site or device onto the new one. The solution is based on SNIA standards, using the SMI providers developed by each of the device vendors. Since the SMI providers are built on a standard CIM model with which all devices seek compliance, the interoperability problems between individual devices are overcome, and developing a solution comprising different vendors' products becomes much easier, faster, and more cost-effective.
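
For a flavor of what configuration backup through SMI-S looks like, here is a hedged sketch using the open-source pywbem client; the provider URL and credentials are placeholders, and CIM_StorageVolume with these properties is a typical SMI-S class but should be treated as an assumption rather than the paper's code.

    import pywbem

    # Connect to the SMI provider's CIM-XML endpoint (placeholder values).
    conn = pywbem.WBEMConnection("https://smi-provider.example.com:5989",
                                 ("admin", "password"),
                                 default_namespace="root/cimv2")

    # Enumerate storage volumes and record the settings to re-apply later
    # on a clean setup with the same or similar components.
    backup = []
    for vol in conn.EnumerateInstances("CIM_StorageVolume"):
        backup.append({
            "DeviceID": vol["DeviceID"],
            "BlockSize": vol["BlockSize"],
            "NumberOfBlocks": vol["NumberOfBlocks"],
        })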

Learning Objectives

  • How to easily migrate logical configuration/data from any Storage Vendor to any storage platforms of a different vendor
  • How to enable replication of the configuration settings from a NAS system to a SAN system thereby allowing the customer to migrate from a NAS setup to a BLOCK array based on any changes in the application environment, without much fuss
  • How administrators can use the SMI-S standard to create back up objects or restoration objects irrespective of Vendor mismatch or any other mismatches on both the setups to be backed up and the setup on which the restoration is to happen

STORAGE UTILITIES

Building the Public Storage Utility

Wesley Leggette, Software Architect, Cleversafe, Inc.

Abstract

The holy grail of cloud storage is to make storage into a utility: a ubiquitous, standard, public resource, in the same sense that electricity and tap water are today. What makes utility storage difficult is that, unlike water or electricity, each user's data is unique and private. In this presentation we propose a solution to this problem. Our proposed solution enables a global, anonymous, public storage service where the storage system has no knowledge about users or the data or metadata they store, yet each user has their own private and secure storage space. Further, we consider some of the payment options that exist within a fully anonymous storage utility.

Learning Objectives

  • Requirements of utility storage
  • How to secure data without knowledge of user identities
  • Methods for anonymous purchase of storage resources

TESTING METHODOLOGIES

Testing iSCSI / SCSI Protocol Compliance Using Libiscsi

Ronnie Sahlberg, Google

Abstract

This presentation introduces the libiscsi iSCSI/SCSI test suite. There are many independent iSCSI/SCSI software targets on the market today but no real iSCSI/SCSI test suites. My experience writing tests and testing many popular iSCSI/SCSI implementations has shown me there is a real need for a good test suite. The libiscsi userspace initiator library comes with a comprehensive iSCSI/SCSI test suite, the most complete and comprehensive open test suite available to SCSI target implementors today. This presentation will cover the structure of libiscsi, the test suite, and how to add more tests. We will look at source code, run a short demo against a software target, and talk about how this test suite will help make your target better. This presentation targets iSCSI target developers. It aims to show a test suite that can be easily extended and applied in an automated regression test framework, and the benefits it brings in improving target quality.
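
libiscsi's compliance tests ship as a CUnit-based binary, iscsi-test-cu, which a regression harness can drive per test, roughly as below. The iscsi:// URL syntax is libiscsi's, but the flag spelling and test names here are assumptions; check iscsi-test-cu --help for the exact forms.

    import subprocess

    # iscsi://host[:port]/target-iqn/lun (placeholder target shown)
    TARGET = "iscsi://192.0.2.10:3260/iqn.2013-09.com.example:target0/0"

    def run_suite(tests):
        failures = []
        for name in tests:
            result = subprocess.run(
                ["iscsi-test-cu", "--test", name, TARGET],
                capture_output=True, text=True)
            if result.returncode != 0:
                failures.append((name, result.stdout[-500:]))
        return failures

    # Hypothetical test names; list the real ones with the binary itself.
    print(run_suite(["SCSI.Read10.Simple", "SCSI.Write10.Simple"]))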

Learning Objectives

  • SCSI/iSCSI testing and test suites
  • Automated protocol compliance testing for your development workflow
  • Creating SCSI protocol compliance tests.

Message Analysis and Visualization in Heterogeneous Environments

Paul Long, Senior Program Manager, Microsoft

Abstract

Microsoft Message Analyzer is the next generation tool for analyzing messages from almost any source. Diagnosis of heterogeneous systems has continued to evolve as we explore new ways to visualize information for any type of trace data, be it a text log file, comma or tab separated data, network capture, or ETW component. Discover how to import Samba debug logs directly or define Text Log adapters, then inspect, filter, and organize as structured data. Learn how to analyze your file systems interoperability with Windows without having to read documentation. Expand your understanding of the interactions by including Windows component-specific information to gain insight into deep protocol and system behaviors.

Learning Objectives

  • Diagnosing any type of log data
  • Discuss how visualization can aid troubleshooting
  • How to discover the inner workings of Windows
  • Finding interoperability issues automatically

A Software Based Fault Injection Framework for Storage Server

Vinod Eswaraprasad, Lead Architect, Wipro Technologies
Smitha Jayaram, Wipro Technologies

Abstract

With the increasing complexity of storage systems, the ability to gracefully handle errors at all layers of a storage server (array firmware, driver, file system, protocols) has become a key challenge for developers. This is crucial in a scalable storage environment, where error handling has to be synchronized across multiple nodes, which makes software fault injection at the various layers of the stack all the more important in storage development and testing. Currently there is no single infrastructure that allows selective injection of faults in a typical storage server implementation. While investigating this problem, we studied the available options and designed a framework that uses a combination of Kprobes and frysk at the system and protocol layers, together with a custom firmware fault injection mechanism, to simulate transient and hard errors at various layers.
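
To illustrate the idea (this is a generic sketch, not the framework described in the talk), a fault injector can wrap a storage-stack entry point and convert selected calls into transient or hard errors, letting tests exercise error handling at that layer.

    import errno, random

    class FaultInjector:
        def __init__(self, fail_rate=0.0, hard_errno=None):
            self.fail_rate, self.hard_errno = fail_rate, hard_errno

        def wrap(self, func):
            def wrapper(*args, **kwargs):
                if self.hard_errno is not None:       # hard, persistent error
                    raise OSError(self.hard_errno, "injected hard fault")
                if random.random() < self.fail_rate:  # transient error
                    raise OSError(errno.EIO, "injected transient I/O error")
                return func(*args, **kwargs)
            return wrapper

    # Example: make ~10% of writes fail with EIO to test retry paths.
    injector = FaultInjector(fail_rate=0.1)
    safe_write = injector.wrap(lambda buf: len(buf))  # stand-in for real write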

Learning Objectives

  • Typical Storage failures events
  • Simulating errors that cause Data Unavailability, Data Loss
  • Usage of open source tools in implementing fault injection framework in storage development
  • Robust and Fault tolerant storage stack design aspects

WORKLOADS

An SMB3 Engineer’s View of Windows Server 2012 Hyper-V Workloads

Gerald Carter, Sr. Consulting Software Engineer, EMC/Isilon

Abstract

Many traditional physical storage workloads are well understood. Quantifying how access patterns change when a hypervisor is inserted between server applications and physical storage requires rethinking what is optimal for a NAS configuration. This presentation will examine several Hyper-V workloads from the perspective of an SMB3 implementer.

Learning Objectives

  • What is the required set of SMB3.0 features to support a Hyper-V workload? How should I prioritize my SMB3 feature development schedule for Hyper-V?
  • How does Hyper-V map guest I/O requests into SMB2/3 operations? What is the resulting distribution and size of requests?
  • What is the performance impact of SMB2/3 features such as LargeMTU, SMBDirect, and the “File Level Trim” IoFsControl?

Direct NFS – Design Considerations for Next-gen NAS Appliances Optimized for Database Workloads

Gurmeet Goindi, Principal Product Manager, Oracle
Akshay Shah, Principal Software Engineer, Oracle

Abstract

NAS appliances have traditionally been a popular choice for shared storage, as they support a standardized and mature NFS protocol and leverage inexpensive Ethernet networking. However, the NFS protocol and traditional NAS appliances are designed for general-purpose file system storage. Database workloads are unique in the kind of requirements they place on a storage system. Different database workloads can have very different response time or bandwidth requirements. Along with the traditional database requirements of atomicity and consistency, critical database systems also have strict uptime and high availability requirements, and database workloads have the ability to convey this information to the storage system. This session will explore some novel ideas to help design the next generation of NAS appliances and integrate newer NFS protocols and breakthrough technologies such as flash storage for optimizing database performance.

Learning Objectives

  • Database Workloads and storage performance